Deep learning and embedded vision top the agenda.
Numerous embedded products and equipment for embedded systems will be on display, while MVTec Software, Adaptive Vision, Irida Labs, Stemmer Imaging, Silicon Software and Matrox Imaging will be among the companies exhibiting deep learning technology, a subset of machine learning and artificial intelligence that employs neural networks.
Powerful embedded computing platforms like Arm-based system-on-chips (SoCs) are opening up new ways to deploy vision technology. 'Arm-based SoC solutions are becoming increasingly efficient and can now often achieve parity with the x86 architectures that still predominate in industrial environments, especially in terms of price-performance ratio,' commented Gerrit Fischer, head of product market management at Basler.
Basler will present an embedded vision development kit at the show, which combines a Dart camera module with BCON for MIPI interface, a Snapdragon 820 Arm processor and the Pylon camera software suite in a single system approach.
'Embedded vision systems will probably become established in every industry, but especially in industrial production. Switching manufacturing to new production concepts, which is being stimulated by the basic idea of Industry 4.0, will call for the use of intelligent systems,' commented Holger Wirth, vice president of R&D automation at Isra Vision.
Isra Vision will present an embedded camera for robot guidance and 3D position detection during Vision 2018. The company's embedded vision systems support standard industrial interfaces based on Ethernet, along with OPC-UA and WLAN.
Paul Maria Zalewski, from Allied Vision, said: 'In the two vertical markets of factory automation and intelligent traffic systems especially, we are seeing a shift towards embedded vision, because price pressure on the manufacturers of these systems is increasing and their customers expect ever more compact vision systems.'
Embedded vision is also expected to open up new applications outside of factory automation, according to Christoph Wagner, product manager for embedded vision at MVTec Software.
He said: 'Many applications involving larger unit numbers are now being implemented on embedded devices, since these devices have many advantages over the standard PC variant, for example reduced power consumption, independence from peripherals, and lastly the price and form factor.'
MVTec will release Halcon 18.11 at the show, which is compatible with 64-bit Arm platforms.
Halcon 18.11 will also have deep learning functionality, a powerful tool for certain tasks like classification, although it won't solve every inspection problem.
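To make the contrast with rule-based programming concrete, the toy sketch below (a generic illustration in plain NumPy, not Halcon code or any vendor's API) trains a classifier from labelled examples instead of hand-coding a decision rule; the synthetic two-dimensional "feature vectors" stand in for features extracted from inspection images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic feature vectors for two part classes (e.g. good vs defective).
good = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
bad = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(200, 2))
X = np.vstack([good, bad])
y = np.r_[np.zeros(200), np.ones(200)]

# Logistic regression fitted by gradient descent: the decision boundary is
# learned from the labelled examples rather than programmed as a fixed rule.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted defect probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)       # gradient step on weights
    b -= 0.5 * np.mean(p - y)                 # gradient step on bias

pred = (X @ w + b) > 0.0
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

The same training loop works unchanged if the two clusters move or the features change, which is exactly the flexibility the article's sources describe; a hand-written threshold rule would have to be re-derived each time.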
'The strength of deep learning lies in how its approach can take more flexible decisions than the sets of predefined rules you find in conventional machine vision systems,' stated Volker Gimple, who heads the machine vision group at Stemmer Imaging. 'Deep learning offers an edge whenever you have test objects with large variations that make them difficult to model mathematically,' added Dr Klaus-Henning Noffz, managing director of Silicon Software.
Today, deep learning is being incorporated into applications where machine vision handles the classification of the test object in question. Dr Noffz offered an example from automotive manufacturing: 'With the help of deep learning, self-learning algorithms can detect every single tiny flaw in the paint, even those invisible to the naked eye.'
While a number of hurdles remain in the application of deep learning, such as the time required to execute and train neural networks, companies like Framos are confident that deep learning will dominate virtually every classification task, including quality assurance and sorting, in the medium term.
Dr Noffz is also a believer: 'By shifting the focus from programming to training such systems, deep learning can achieve widespread use. Tasks related to classification, for instance, are much easier to handle than with algorithmic techniques. Neural networks are especially suited to many other activities, as well, including those involving reflective surfaces, environments with inadequate lighting, moving objects, robotics and 3D.'
Exhibiting for the first time at Vision will be: Sualab, a South Korean company that plans to unveil its SuaKit deep learning software for machine vision inspection; Deepsense, an AI company based in Warsaw, Poland; and Portuguese inspection company Neadvance.
Sualab says that automobile companies in Japan and electronics companies in other Asian regions are using its SuaKit software.
Deepsense will present a solution for visual quality control. The software is designed to inspect objects with complex patterns, such as wood or textiles. Robert Bogucki, chief science officer at Deepsense, sees great potential in applying deep learning to healthcare in the future.
Combining deep learning with traditional machine vision can nevertheless make sense when it comes to ensuring 100 per cent classification, as Irida Labs' Vassilis Tsagaris commented: 'It won't be long before we start seeing more and more hybrid systems. Most of the time, you need both deep learning and computer vision algorithms that have proven to be robust.'
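One way such a hybrid can be structured, sketched below as an illustration only (this is not Irida Labs' implementation, and the weights of the "learned" scorer are hypothetical constants standing in for a trained model): a cheap classical rule screens out the obvious passes, and only ambiguous patches are sent to the learned classifier.

```python
import numpy as np

rng = np.random.default_rng(1)

def rule_based_screen(patch):
    """Classical check: a very low-contrast patch is clearly defect-free."""
    return patch.std() < 0.05

def learned_score(patch, w=25.0, b=-1.0):
    """Stand-in for a trained classifier; w and b are hypothetical constants."""
    return 1.0 / (1.0 + np.exp(-(w * patch.std() + b)))

# Four synthetic 8x8 image patches with increasing contrast.
patches = [rng.normal(0.5, s, size=(8, 8)) for s in (0.01, 0.02, 0.2, 0.3)]

results = []
for patch in patches:
    if rule_based_screen(patch):
        results.append("pass (rule)")        # fast path, no model inference
    else:
        verdict = "defect" if learned_score(patch) > 0.5 else "pass"
        results.append(verdict + " (model)")
print(results)
```

The design point is the one Tsagaris makes: the proven classical algorithm handles the bulk of easy cases robustly and cheaply, while the learned component is reserved for the variation it handles best.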
EMBEDDED AND DEEP LEARNING PRODUCTS
Active Silicon (1H52) will present its Vision Processing Unit, an embedded computer designed for use in industrial or medical OEM applications. The product will be demonstrated running with live acquisition from four USB3 Vision cameras.
Aria, a USB3 board-level camera from Alkeria (1C26), will be introduced at the show. The camera, which weighs just 5g, has on-board image processing and I/O interfaces, and is available with a choice of image sensors, including Teledyne e2v Sapphire and Ruby sensors and Sony Pregius IMX sensors. The camera can reach 520fps and is suitable for embedded vision systems. An optional C-mount or S-mount lens adapter is available, and the camera offers customer-specific operation porting and on-board pre-processing.
Allied Vision (1D30) will present its new Alvium camera series. With extensive functions for image correction and optimisation, a large selection of current sensors, intelligent energy management, and cost-optimised design, the new camera series combines the advantages of classic machine vision cameras with those of embedded sensor modules. It opens up new ways for users to switch from PC-based image processing applications to embedded systems. Several live demonstrations running on different embedded boards will show Alvium cameras in action.
Basler's (1E42) embedded vision development kit consists of: the Snapdragon 820, a Qualcomm Arm-based system-on-chip with an integrated image signal processor; the Basler Dart 5-megapixel camera module with BCON for MIPI interface; and the Pylon camera software suite for the Linux operating system. A live presentation, implemented with Basler's time-of-flight camera, will demonstrate cost-optimised system integration with embedded vision.
Euresys' (1H46) Open eVision Easy Deep Learning library will be shown, a convolutional neural network-based image classification library.
EVT (1A63) will be displaying its RazerCam ZLS smart line scan camera, with an optional Zynq or Myriad 2 deep learning processor. The camera's FPGA is freely programmable under Linux; the camera is available with Power over Ethernet (PoE) and automatic shading correction, and is capable of multi-data streaming.
Irida Labs (1116) will show its EVLib deep learning software library. The software consists of CNN models optimised to power AI on edge devices such as cameras, IoT devices and embedded CPUs. EVLib contains more than 100 pre-trained learning models. The runtime models are optimised for Arm-based CPU platforms or GPU acceleration. Library functions include: people counting, vehicle counting, soft biometrics and 2D/3D object detection and tracking.
Matrox Imaging (1E17) will be showing its Matrox Design Assistant flowchart-based software, including new tools for image classification using deep learning and for image registration using photometric stereo. These tools will be demonstrated performing inspection and OCR of hard-to-see text.
MVTec Software (1E72) will show Halcon 18.11, with new deep learning functionality and expanded embedded vision capabilities, and Merlic 4, with a deep learning OCR tool. In addition, there will be a demo of the HPeek system image for Raspberry Pi. HPeek is MVTec's free, licence-free benchmarking demo program for evaluating Halcon's performance on Arm-based embedded platforms.
On display at the Phytec booth (1H67) will be the PhyBoard-Nunki embedded imaging kit, which uses the two image processing units of NXP's i.MX 6 processor. It is designed for embedded applications with multiple cameras, such as quality control applications that require a thermal imaging camera and a colour camera, or stereoscopy applications. The kit's two camera ports can be used via five physical interfaces: two parallel PhyCam-P interfaces, two serial PhyCam-S+ interfaces, and a MIPI camera interface. The firm will also show its i.MX 6 processor for cost-sensitive projects, camera modules with the i.MX 8 (QuadMax) processor for high-performance applications, and the PhyCam-M MIPI camera system for industrial applications.
Sualab's SuaKit software contains: a 'continual learning' function, which uses a pre-trained model to inspect similar products, such as in PCB defect inspection; a 'multi image analysis method' for fast inspection; a 'one class learning' function, which deals with the problem of a relative lack of defect data found in industrial inspection by detecting defective products using only normal examples; and an 'uncertainty data provision' function, which shows the level of difference between normal and defect images.
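The 'one class learning' idea, modelling only normal examples and flagging anything that deviates from them, can be sketched generically (this is not SuaKit's actual algorithm) with a reconstruction-error detector: fit a low-dimensional subspace to normal samples, then score new samples by how poorly that subspace reconstructs them.

```python
import numpy as np

rng = np.random.default_rng(42)

# Training set of NORMAL samples only: 300 feature vectors lying near a
# 2-dimensional subspace of 10-dimensional feature space, plus small noise.
basis = rng.normal(size=(2, 10))
normal_train = (rng.normal(size=(300, 2)) @ basis
                + rng.normal(scale=0.05, size=(300, 10)))

# Fit the subspace by PCA: centre the data, take the top singular vectors.
mean = normal_train.mean(axis=0)
_, _, vt = np.linalg.svd(normal_train - mean, full_matrices=False)
components = vt[:2]            # directions spanning "normal" variation

def reconstruction_error(x):
    """Project onto the normal subspace and measure what is left over."""
    centred = x - mean
    recon = centred @ components.T @ components
    return np.linalg.norm(centred - recon)

# Threshold taken from the normal training data itself (a high quantile).
threshold = np.quantile([reconstruction_error(x) for x in normal_train], 0.99)

normal_test = rng.normal(size=2) @ basis + rng.normal(scale=0.05, size=10)
defect_test = rng.normal(size=10)   # does not lie near the normal subspace

print("normal sample error:", round(reconstruction_error(normal_test), 3))
print("defect sample error:", round(reconstruction_error(defect_test), 3))
```

Because the defective sample does not lie near the subspace learned from normal data, its error should be far larger than the threshold, so defects are caught without a single labelled defect image, which is the selling point when defect data is scarce.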
Silicon Software (1C72) will present its MicroEnable 5 Marathon DeepVCL, an FPGA-programmable deep learning frame grabber.
Stemmer Imaging's (1E52) CVB Polimago machine learning tool will be shown solving a number of machine learning applications across various industries. Meanwhile, an embedded vision system will demonstrate communication between Common Vision Blox and a B&R PLC. All communication between the HMI, motion controller and vision system is handled using the CVB OPC UA tool. This allows vision systems to adopt a common architecture within an OPC UA open standard network, providing platform- and OS-independent communication for smart factory applications.
The Tulipp EU-funded project (1A74) will display technology for optimisation of machine vision algorithms for embedded systems.
Title annotation: Show preview: Vision
Publication: Imaging and Machine Vision Europe
Date: 1 October 2018