Launch HN: Cerrion (YC S22) – Computer vision to reduce production line problems (cerrion.com)
80 points by michaelgygli on Aug 18, 2022 | 24 comments
Hey HN! Michael here, computer vision researcher turned co-founder of Cerrion (https://www.cerrion.com/). I’m here with my co-founders Karim and Nikolay.

Cerrion helps manufacturers automatically detect problems on their production lines using computer vision. You can see this in action here, for detecting issues on conveyor belts and in glass bottle production: https://youtu.be/DuSN-qJcoNQ

It’s estimated that undetected problems on production lines cost the manufacturing industry $1 trillion in lost production time per year. This is because staying on top of your production is hard and works best with trained and experienced eyes. We are working on making this easier, by automating production line monitoring with computer vision.

The basic idea is simple: our product learns how a manufacturing process looks when things are going well, then can detect and track anomalies and other problems in real time.

This has several major benefits: (1) it allows detecting subtle issues on the production line before they become big and costly, (2) it reduces the need for human monitoring, and (3) it facilitates root cause analysis remotely in a matter of minutes by showing video data of the problem(s).

We came to work on this because Nikolay previously co-founded Assaia (https://assaia.com/), where we learned how messy and opaque ground operations at airports are. We quickly realized that manufacturing companies suffer from similar pain points, given that manufacturing processes are highly complex. So we started talking to manufacturing companies and soon recognized that computer vision could significantly increase their process transparency and thus help them better run their production lines.

We have built a video analysis pipeline using a dockerized Python stack. The pipeline ingests RTSP video streams and analyzes them in real time with a convolutional neural network, predicting what goes wrong, and where, in the production process. We aggregate these predictions into events, push them to a Kafka queue, and serve them back to the customer via a real-time alerting system and a detection library. The real-time alerting lets customers take action immediately. The detection library offers an analytics dashboard, as well as videos of the relevant problems. With this, our customers can find systematic production issues and do root-cause analysis.
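For the curious, the aggregation step (turning noisy per-frame predictions into discrete events) can be sketched roughly like this. The `Event` shape, threshold, and gap values below are illustrative assumptions, not our actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """A contiguous run of anomalous frames, ready to push to Kafka."""
    start_ts: float
    end_ts: float
    peak_score: float

def aggregate(frames, threshold=0.8, max_gap=1.0):
    """Group (timestamp, anomaly_score) frame predictions into events.

    Frames scoring above `threshold` open or extend an event; a gap of
    more than `max_gap` seconds between anomalous frames closes it.
    """
    events, current = [], None
    for ts, score in frames:
        if score < threshold:
            continue
        if current and ts - current.end_ts <= max_gap:
            # Extend the open event with this anomalous frame.
            current.end_ts = ts
            current.peak_score = max(current.peak_score, score)
        else:
            if current:
                events.append(current)
            current = Event(ts, ts, score)
    if current:
        events.append(current)
    return events
```

In the real pipeline each event would then be serialized and published to Kafka; here the list of events stands in for that.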

The manufacturing landscape is heterogeneous and production processes are constantly changing. To be able to serve all kinds of industries, we need an adaptable product. To get there, we are working hard to make our product plug-and-play—essentially, to get to the point where it fully automatically learns how a manufacturing process looks when things are going well and detects deviations from that baseline. In practice this means we need to build a performant model using transfer learning and self-supervision, and automatically adapt it and keep it up to date from just a handful of user inputs (for which we use active learning).
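To make the active-learning part concrete, here's a toy sketch of uncertainty sampling, one common recipe for picking which frames to ask the user about. The function and the 0.5 decision boundary are illustrative assumptions, not our production code:

```python
def select_for_labeling(predictions, k=5):
    """Uncertainty sampling: pick the k frames whose anomaly score is
    closest to the decision boundary (0.5), i.e. where the model is
    least sure, and route them to the user for labeling.

    `predictions` is a list of (frame_id, anomaly_score) pairs.
    """
    by_uncertainty = sorted(predictions, key=lambda p: abs(p[1] - 0.5))
    return [frame_id for frame_id, _ in by_uncertainty[:k]]
```

Labels collected this way give the most information per user input, which is what lets a handful of clicks keep the model up to date.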

BTW, we pay our bills by charging a SaaS license per production line.

Thanks for reading! We are curious to hear your thoughts!




This sounds like a very interesting area! I guess you are the "Statistical Process Monitoring"/"Control charts"/"Shewhart charts" [0] for images. Very cool!

Is this correct or is your solution totally different? In what aspect is it most similar and most different from "Control charts"?

Are there any keywords for interested Hacker News readers to research this further and play with this concept? Is it correct that you do "just" outlier detection on the embeddings of the images? I guess it works something like this:

1) Image --CNN--> Embedding: maybe enforce (properties) of distribution on the embedding (something like VAE)

2) Approximate this distribution and call a (sequence of) images an outlier if its likelihood is small. Alternatively, compare the empirical distribution of a few collected images to a distribution of "good images", e.g. via embedding into RKHS.
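E.g. under a diagonal-Gaussian assumption, step 2 might look like this toy sketch (all names are made up, and a real system would use a richer density model):

```python
import math
from statistics import mean, stdev

def fit_gaussian(embeddings):
    """Fit an independent (diagonal-covariance) Gaussian per dimension
    to embeddings collected while production was running well."""
    dims = list(zip(*embeddings))
    return [(mean(d), stdev(d)) for d in dims]

def neg_log_likelihood(model, x):
    """Score one embedding; large values mean 'unlikely', i.e. outlier."""
    nll = 0.0
    for (mu, sigma), xi in zip(model, x):
        nll += 0.5 * ((xi - mu) / sigma) ** 2 + math.log(sigma * math.sqrt(2 * math.pi))
    return nll

def is_outlier(model, x, threshold):
    return neg_log_likelihood(model, x) > threshold
```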

What type of anomalies can be detected? Does it evaluate each image separately (i.e. it cannot differentiate between objects going from left to right) or does it "understand" short sequences of images? The latter sounds even more interesting. Could you provide some keywords for it?

On the production line, there are already cameras and computer vision products, e.g. Halcon. These can be used to "drag/drop" a computer vision pipeline together. Could your software be integrated into them such that the output can be further processed in Halcon etc.?

[0]: https://en.wikipedia.org/wiki/Control_chart


For anomaly detection, does your team have to worry about concept drift and data distribution shifts? If so, how do you combat that?


Great question. Yes, that's a big topic for us. We closely monitor production metrics to detect things like input changes and output drift. This allows us to automatically detect if something changes, so that we can trigger retraining and redeployment.


We built a lot of this in-house though, as we didn't find suitable existing tooling for it.
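A toy version of such a drift check (all names and thresholds are made-up illustrations, not our actual monitoring): compare a recent window of some production metric against a baseline collected when the model was trained.

```python
import math
from statistics import mean, stdev

def drift_detected(baseline, window, z_threshold=3.0):
    """Flag drift when the recent window's mean metric (e.g. image
    brightness, or the rate of anomaly detections) sits more than
    z_threshold standard errors away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    standard_error = sigma / math.sqrt(len(window))
    return abs(mean(window) - mu) > z_threshold * standard_error
```

When this fires, it can trigger the retraining and redeployment mentioned above.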


How does this differ from instrumental? https://instrumental.com/


Instrumental focuses on detecting defects in already assembled/finished products; we focus on detecting anomalies in the production process itself, i.e. we are positioned earlier in the value chain and use video rather than still images to detect anomalies.


Would be curious if you are able to integrate metrology products that generate point clouds vs video feeds. I have a buddy that is in that space for automotive manufacturing, they use a variety of laser scanning products for production line setup/calibration/etc but a runtime monitoring system that can detect small deviations might be an entirely new revenue stream.


We're not doing anything that isn't video-based at the moment, as we rely on the versatility of optical sensors. However, you are totally right regarding the value of precise point cloud generation; definitely something to look at in the future.


My first question is how to pronounce assaia.

Second: I had an impression that production hardware is air-gapped. I assume it’s not really the case?


haha, asking the real questions re Assaia. The "A" is as in "another".

They are not usually air-gapped, but there is often tight security in place. But it doesn't matter for us, as we sell a standalone solution and don't need access to the machines (for now).


Nice product. May I ask the differentiator with commercial packages for industrial vision like Halcon or Cognex?


Thank you very much. Current industrial vision solutions like Halcon or Cognex focus on quality control, i.e. they inspect products for defects, while we focus on process control, where we look for anomalies in the process itself: in the material flow, material handling, etc.


This is a neat concept. I have no clue what this costs to set up, since it likely has a big setup-time expense (training each site), but looks cool.

Are there other areas you've noticed that benefit from computer vision to compare base vs deviations?


Great point. The key here, given the variability in manufacturing processes, is to have a plug & play setup. That's why we are working on getting our setup time down to just a few hours, by focusing on unsupervised model adaptation.

Re other areas: the same concept can be applied anywhere fast reaction times to deviations are essential and the deviations are visually distinct. Think loading docks, where you want to make sure that cargo is loaded in time, or machine assembly, where you want to avoid delays. I think the list of areas where such an approach would add value is quite long.


I'd suggest not doing this.

What you're doing is not novel. You have a much better chance if you focus on, e.g., food and beverage.

There are many reasons, but your setup will go smoother, GTM will be easier, and you will get a higher NPS. You will also have the opportunity to develop novel aspects.


We are indeed focusing - my comments regarding other areas were my personal thoughts on the above question and not Cerrion's strategy.


I was going to say this seems best as a packaged solution + consulting/setup fee. I could see this working in fast-food assembly stations as well, but that may have more variance due to the human element.

Also could add on an analysis component of "study my flow and quantify potential jam points".


We are more in the mode of "deploy Cerrion at your most critical production vantage points and we tell you when, why, and where deviations happen". This mode doesn't require any consulting, because every manufacturer knows their critical points and our tech doesn't need manual customization. To achieve fast growth, we will focus on processes where this is always applicable and consulting is not needed.

The analysis of quantifying potential jam points and surfacing targeted improvement areas is something we are already working on, given that this can be done automatically in the customer's dashboard by running some correlation analysis on our detections.
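To illustrate the idea (a made-up sketch, not our actual dashboard code): rank vantage points by how strongly their per-shift detection counts correlate with jam counts over the same shifts.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def jam_precursors(detections_by_point, jam_counts):
    """Rank vantage points: the ones whose detection counts track jam
    counts most closely are the likeliest improvement targets."""
    ranked = sorted(
        detections_by_point.items(),
        key=lambda kv: pearson(kv[1], jam_counts),
        reverse=True,
    )
    return [name for name, _ in ranked]
```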


Congratulations on your launch.

I think you've done a really great job with your landing page and copy.


Thank you for the kind words, all praise goes to Karim.


Have you tried this in any really messy environments, like a lumber mill? Those highly variable natural inputs feel like they'd prove challenging to find anomalies in vs. e.g. a bottle-filling line.


What camera hardware are you using? https://www.luxonis.com/ might be an interesting option.


We use off-the-shelf cameras (we just need an RTSP stream), but thanks for the input, we'll definitely check them out!


Is your image processing FOSS?



