
2020 vision: Technology for deep learning, 3D imaging and embedded computing, along with new products for infrared imaging and illumination, are all on the horizon for the coming year.

Deep learning will continue to be a major trend in 2020, with software and camera providers putting emphasis on neural network-based offerings. Deep learning software tools solve computer vision tasks through training rather than explicit programming. One of Adaptive Vision's key focuses going into next year will be the development of a low-level deep learning framework, codenamed CoS. Only 20 to 50 samples will be needed to train the software, which is designed to give high performance on both GPUs and CPUs.
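The training-versus-programming distinction can be illustrated with a toy example (this is a generic sketch, not Adaptive Vision's framework): a single perceptron in plain Python learns a decision rule from a handful of labelled samples, instead of having the rule coded by hand.

```python
# Minimal sketch of "solving by training": a perceptron learns an
# OR-style decision rule from labelled samples alone.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # no error, no weight update
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labelled training data: the rule is never written out explicitly
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(*x) for x, _ in samples])
```

Real frameworks replace the perceptron with deep networks and the toy samples with images, but the workflow - show examples, learn a decision boundary - is the same.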

Another important point on Adaptive Vision's roadmap is the continued development of its graphical development environment. With the firm's Inspector module, for example - a new edition of the software designed for simplified inspections - users do not have to create the entire structure of an application; they start from a predefined template and focus on building the vision pipeline. The first version of this, the minimal program view, is already available in version 4.12 of the professional edition.

In the coming months, Sualab plans to release a new version of SuaKit for smart camera applications. The software is designed so that deep learning algorithms can be embedded on smart cameras to avoid the use of a PC.

The current version of SuaKit, 2.3 - Sualab's deep learning machine vision software library - includes an improved one-class learning feature that detects defects without being trained on images of defective items. In addition, its label noise detection function automatically identifies mislabelled images, giving a cleaner training dataset and better accuracy.
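SuaKit's one-class learning is proprietary, but the underlying idea - model only the distribution of defect-free samples and flag anything that deviates from it - can be sketched with a simple statistical stand-in (hypothetical feature scores, Python stdlib only):

```python
import statistics

def fit_one_class(good_scores):
    """Learn the feature distribution from defect-free samples only."""
    mu = statistics.mean(good_scores)
    sigma = statistics.stdev(good_scores)
    return mu, sigma

def is_anomaly(score, mu, sigma, k=3.0):
    """Flag a sample whose feature deviates more than k standard deviations."""
    return abs(score - mu) > k * sigma

# "Training" uses good parts only - no defective examples needed
good = [0.98, 1.01, 0.99, 1.02, 1.00, 0.97, 1.03]
mu, sigma = fit_one_class(good)

print(is_anomaly(1.00, mu, sigma))  # in-distribution sample
print(is_anomaly(1.60, mu, sigma))  # defect-like outlier
```

A deep one-class model replaces the single hand-picked feature with learned representations, but the training set still contains only good parts.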

Looking toward the first half of next year, SuaKit 3.0 will be released with improved usability and enhanced performance, says Sualab. Going forward, the firm will be looking for new vision projects that can make use of the features of SuaKit 3.0 and the smart camera version of the software.

Euresys will expand the capabilities of its Open eVision image analysis libraries with the release of new deep learning and 3D functions.

In addition to the classification function already available in the Easy Deep Learning library, the firm is adding unsupervised defect segmentation and supervised semantic segmentation, both of which will be available by the end of the year. New object detection functionality, featuring labelled bounding boxes, will also be released early next year.

A MIPI CSI-2 receiver IP core from Euresys' subsidiary, Sensor To Image, will be available soon. It will be delivered with a reference design for fast development, and is compatible with Xilinx Artix-7, Kintex-7, Zynq-7000 and UltraScale+ FPGAs.

Finally, Euresys will expand its CXP-12 frame grabber range with the addition of the Coaxlink Mono and Duo CXP-12. These are one- and two-connection CoaXPress 2.0 low-profile frame grabbers complementing the four-connection Coaxlink Quad CXP-12, which is already available.

Teledyne Imaging is seeing early successes from its research and development around machine learning and neural networks - the foundation of practical AI technologies that customers will be able to use across a wide range of applications.

At the same time, the company is looking at more sophisticated vision processing and embedded vision that combines LWIR, SWIR, UV and visible imaging. This will create new opportunities as the semiconductor, electronics inspection, robotics, environmental, food sorting, traffic, and healthcare markets continue to evolve over the coming year.

Barcode reading, time-of-flight, 3D image capture, and high-speed vision will continue to be a focus for the machine vision teams at Teledyne Imaging. The firm will continue to develop advanced software to nurture markets where its customers are looking for intelligence to improve their businesses.

The market for fast vision interfaces, such as CoaXPress, 10 GigE and Camera Link HS, is spreading beyond factory automation to many non-industrial sectors such as medicine, intelligent transportation systems and logistics, according to Silicon Software.

The firm also anticipates increased demand for complete solutions in the future, and is therefore developing corresponding business models to address this. In addition to suitable hardware, such complete solutions can consist of deep learning, 3D, or hyperspectral/multispectral applications paired with powerful and easy-to-use software.

The combination of classical algorithms with those based on artificial intelligence will create scope for better, more cost-effective or more complex applications. The use of inexpensive embedded vision technology will also expand the focus to additional industries and fields of application.

Flir will continue to develop the deep learning capabilities of its Firefly DL camera. The camera contains Flir's Neuro technology, which allows developers to deploy trained neural networks directly onto the camera, reducing system cost and complexity by making decisions on-camera without a host PC. With its small size, low weight and low power consumption, the Firefly DL is ideal for embedding into mobile, desktop and handheld systems.

Following the launch of its IDS NXT Rio and Rome cameras, which are able to perform AI-based vision tasks directly 'on the edge', IDS Imaging Development Systems intends to extend the capabilities of its imaging portfolio further in the coming year.

One of the firm's main focuses will be to offer an easy-to-use industrial AI camera solution that won't require any programming knowledge to use. This includes the development of a smart GenICam vision app that will allow users to upload camera features they have created to IDS NXT devices, and use them as if they were part of the standard IDS NXT camera functionality.

With 3D camera technology playing an important role in automation tasks, such as bin picking and quality control, IDS also intends to extend its Ensenso product range, further incorporating on-board data processing to reduce PC overheads. In addition, the firm will extend its portfolio of uEye cameras with new sensors and new functionality, and add more than 100 new USB3 Vision models to its CP and SE camera families.

Photoneo introduced a number of new products and solutions throughout 2019, including a solution for automated 3D model creation, an automated mobile robot (AMR) named Phollower 100, and updates to its PhoXi Control software and Bin Picking Studio.

Upcoming is the release of the MotionCam-3D, a high-resolution and high-accuracy 3D camera. Photoneo also started several interesting projects and collaborations with customers this year, which will see development throughout 2020.

Sick will develop a variety of application-specific cameras based on the programmable 2D and 3D vision sensor portfolio, in combination with the Sick AppSpace ecosystem. These will have a dedicated interface, ready to solve a specific task, for example fine positioning of stacker cranes, robot guidance for picking applications, quality inspection of printed labels, presence inspection in consumer goods and quality control of connectors in electronics.

Moreover, a new concept will be introduced that allows customers to plug their own tools directly into a configurable vision sensor and extend its functionality. Sick will also add competitive entry-level models with improved resolution and performance to the 2D and 3D vision portfolio. In particular, new 3D camera models will be launched next year based on Sick's ROCC technology.

Lucid Vision Labs plans to expand its Atlas family with more sensor choices and incorporate the higher speeds of NBase-T. Its current 5GigE Atlas camera offers faster frame rates, small size and excellent price-performance. It features a TFL-lens mount (M35) which supports large format sensors such as the 31.4 megapixel Sony Pregius IMX342 global shutter CMOS. The camera line is designed for applications requiring high bandwidth over 5GBase-T PoE.

By the end of the year, Lucid also plans to offer a time-of-flight camera module on an embedded board with MIPI output using the Nvidia Jetson TX2 and its own processing library. The firm's current compact time-of-flight camera, Helios, based on Sony's DepthSense IMX556 sensor, provides high 3D depth precision and industrial reliability at an attractive price.
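For background on how continuous-wave ToF sensors of this kind measure depth (the general operating principle, not Sony's or Lucid's specific implementation): the sensor measures the phase shift of an amplitude-modulated light signal over the round trip to the target and back.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_rad, mod_freq_hz):
    """Depth from the measured phase shift of a continuous-wave ToF signal.

    The light covers the distance twice (out and back), so
    depth = c * phase / (4 * pi * f_mod).
    """
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz):
    """Maximum depth before the phase wraps around at 2*pi."""
    return C / (2 * mod_freq_hz)

# At 100 MHz modulation, a pi/2 phase shift corresponds to roughly 0.37 m,
# and the unambiguous range is about 1.5 m
print(round(tof_depth(math.pi / 2, 100e6), 3))
print(round(unambiguous_range(100e6), 3))
```

The trade-off is visible in the two functions: a higher modulation frequency gives finer depth resolution per unit of phase, but a shorter unambiguous range.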

Both product lines will have expanded feature sets, such as support for multi-camera configurations.

Basler will continue to broaden its portfolio with new lighting solutions, cost-effective lenses and several new camera models.

For example, the firm will enlarge its Ace 2 camera portfolio and launch several models with the latest sensors, as well as expand the products in its Boost series based on CoaXPress 2.0. This standard enables higher resolutions, faster frame rates and interaction with new frame grabbers, including those with deep learning functionality.

In the 3D segment, the Basler Blaze time-of-flight (ToF) camera will deliver high-precision, cost-efficient imaging in real time next year, making ToF technology accessible for the mainstream market. Lastly, the firm's embedded vision solutions portfolio will also be expanded by additional camera modules and combinations with various processors.

Over the coming year Active Silicon plans to expand its Harrier family and Firebird frame grabber range, in addition to developing its embedded system offerings - which continue to provide robust and flexible options for medical imaging, manufacturing and defence applications.

The Harrier camera interface boards support high-definition, real-time video transmission across long cable lengths and multiple slip rings. Ideal for pipeline inspection, surveillance and industrial vision in general, Harrier 3G-SDI boards offer an interface solution supporting HD-VLC technology. These boards fit on compact autofocus zoom cameras. Active Silicon will also launch USB and IP interface boards in the coming months.

Imago Technologies will launch new developments in image processing hardware. Event-based vision - technology whereby pixel-level events are generated by brightness changes - will be enhanced on both the hardware and software side. The company will add new models to its VisionCam EB event-based camera line, with the latest sensors and additional functionality to turn events into results.
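The event-generation principle described above can be sketched in a few lines (a simplified frame-difference model, not Imago's implementation): each pixel fires an event, with positive or negative polarity, when its log-brightness changes by more than a contrast threshold.

```python
import math

def brightness_events(prev_frame, new_frame, threshold=0.2):
    """Emit (x, y, polarity) events where log-brightness changed enough.

    Mirrors the event-camera principle: each pixel fires independently
    when |log(I_new) - log(I_prev)| exceeds a contrast threshold, so
    static parts of the scene produce no data at all.
    """
    events = []
    for y, (row_prev, row_new) in enumerate(zip(prev_frame, new_frame)):
        for x, (a, b) in enumerate(zip(row_prev, row_new)):
            delta = math.log(b) - math.log(a)
            if abs(delta) >= threshold:
                events.append((x, y, 1 if delta > 0 else -1))
    return events

# Two tiny 2x2 grey-level frames: one pixel brightens, one darkens
prev = [[100, 100], [100, 100]]
new  = [[100, 150], [ 60, 100]]
print(brightness_events(prev, new))
```

Real event sensors do this asynchronously in the pixel circuitry rather than by comparing frames, which is what gives them microsecond latency and very sparse output.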

Imago will also add more platforms to its VisionBox computer range with the Tegra TX2 GPU. The new architectures are complemented by trigger-over-Ethernet and the RTCC real-time controller, for seamless connection of cameras to peripheral devices.

With the recent launch of four three-CMOS prism area scan colour cameras dedicated to microscopy-based systems, JAI has demonstrated its commitment to supporting the need for high-quality imaging in the medical and life sciences market. In the months to come, the firm will be focusing on applications in digital pathology, ophthalmology, surgical imaging and other medical diagnostics applications incorporating digital camera technology.

JAI's near future product roadmap contains new trilinear and three-CMOS RGB prism-based colour line scan cameras with SFP+ interfaces, including a four-CMOS prism-based RGB/NIR line scan camera also equipped with an SFP+ interface.

In the plans are also two new multispectral area scan cameras with 3.2-megapixel and 1.6-megapixel resolution, both equipped with 10GigE Vision interfaces. These new Fusion series cameras will feature dual-sensor CMOS technology that simultaneously performs both visible and near-infrared inspection for use in applications such as food sorting, print inspection, and electronics and flat panel inspection.

Sony Image Sensing Solutions Europe says that next year will be a key year for polarised camera technologies, with significant growth in Intelligent Transport Systems (ITS) and quality inspection. For industrial applications, polarised cameras allow reflective surfaces to be captured or seen through, and can identify and quantify otherwise invisible weaknesses in a product.
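For background, on-sensor polarised cameras typically capture intensities behind polarisers at 0, 45, 90 and 135 degrees; the degree and angle of linear polarisation then follow from the Stokes parameters. The sketch below is standard polarisation optics, not Sony-specific code:

```python
import math

def polarisation_features(i0, i45, i90, i135):
    """Stokes parameters from four polariser-angle intensities.

    s0 is total intensity; s1 and s2 capture the linear polarisation
    components. DoLP (degree of linear polarisation) is high for glare
    off glass or water; AoLP gives the orientation of that polarisation.
    """
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = math.hypot(s1, s2) / s0   # 0 = unpolarised, 1 = fully polarised
    aolp = 0.5 * math.atan2(s2, s1)  # radians
    return dolp, aolp

# Hypothetical pixel values for strongly polarised windscreen glare
dolp, aolp = polarisation_features(200, 120, 40, 120)
print(round(dolp, 3))
```

Thresholding the DoLP map is one way such systems separate glare from the scene behind it, which is why the technique helps cameras see into vehicles.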

In ITS applications there will be lifesaving benefits. German government data suggest around 66 per cent of driving prosecutions stall when the driver can't be identified. Rules around drivers using mobile phones, or not wearing a seatbelt, are hard to police as glare restricts the ability to see into the car.

Using polarised modules comes with a learning curve for system developers, with new skills needed. SDKs and application libraries will shrink this curve greatly.

In 2019, Sony launched an SDK for polarised cameras, the XPL-SDKW. Depending on the application, the SDK can cut the typical polarised camera system design time from 6 to 24 months, to just 6 to 12 weeks.

Ametek Surface Vision will introduce new functionality in its SmartView and SmartAdvisor surface inspection and monitoring solutions. In the SmartView system, this will include a more intuitive user interface; parts-per-million data calculations that make it easier to visualise defect density; and developments to SmartView's defect detection and parallel classification capabilities - including detecting quiet areas on coated products, removing defects that match a target shape, and enhanced repeating defect algorithms.

SmartView is a materials inspection solution for the metals, paper, plastics and non-wovens industries. It is an automated solution that detects, classifies and visualises surface defects, ensuring quality standards are maintained.
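A parts-per-million defect density figure of the kind SmartView reports reduces to a simple area ratio (illustrative values below, not Ametek's actual calculation):

```python
def defect_density_ppm(defect_area, inspected_area):
    """Defect density as parts per million of inspected surface area."""
    return defect_area / inspected_area * 1_000_000

# e.g. 0.3 m^2 of detected defects over 50,000 m^2 of coated web
print(round(defect_density_ppm(0.3, 50_000), 3))
```

Expressing density this way makes long runs of web material comparable regardless of how much was inspected.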

Next year, IO Industries will expand its camera offerings while finishing the roll-out of two recently announced camera models.

The first is a compact CoaXPress-output industrial camera with a 26.2-megapixel Gpixel GMAX0505 sensor, the Victorem 262G41-CX. The second is the Volucam, a family of cameras with Sony and Gpixel sensors, featuring raw video recording to a built-in solid-state drive. These cameras are designed for multi-camera synchronised video recording applications, such as volumetric video capture. The Volucam has been a stealth project at IO Industries for more than a year, and its release will greatly affect the growing market of volumetric video capture studios, which can use 100 cameras at a time.

Laser Components will look to update its Albalux FM white light module. With a fibre-guided continuous-wave luminous flux of 150 lumens, the white light laser module allows precise, high-contrast illumination, even in areas that are difficult to access. This opens up new possibilities in endoscopy, surgical headlamps and 3D image processing.

In addition to its brightness, the module offers precise beam guidance, sharp beam edges and low power consumption. The compact housing of the plug-and-play module contains specially developed electronics for safe control of the light source.

The light source is the innovative laser light technology from SLD Laser. Two semi-polar blue GaN laser diodes (450nm) illuminate a phosphor chip, producing a brilliant, incoherent white light that is more than ten times brighter than the brightest white LEDs available.

The Albalux FM with fibre output is the first model in a comprehensive range of products.

A statement from Dr Olaf Munkelt, managing director of MVTec Software: 'In the first six months of 2020, MVTec will continue addressing the most important topics of machine vision with new releases of the Halcon and Merlic software products, as well as its deep learning tool. These topics primarily include technologies such as 3D vision, deep learning, and embedded vision, with each becoming more important in the context of the Industrial Internet of Things, or Industry 4.0.

'MVTec's software releases will enable users to apply AI-based technologies, such as deep learning, on a whole new level. MVTec will optimise its standard software products with regard to embedded vision applications.'

Pleora is focusing on two market areas for the next year. In security and defence, the firm will integrate its real-time sensor networking technology with AI and machine learning capabilities to further improve local situational awareness, battlefield intelligence, and decision-making support.

In the industrial automation market, the firm will focus on technologies to help bring AI and machine learning into inspection systems with smart devices that process and share data and make decisions at the edge of the inspection network. These include smart frame grabbers that integrate directly into an existing inspection network, and software-enabled embedded processing boards that operate as virtual GigE sensors to create a seamlessly integrated network.

A statement from Mark Williamson, managing director of Stemmer Imaging UK: 'By introducing application-specific subsystems that can be used in a number of different markets, in 2020 Stemmer Imaging will be providing system integrators with tools to save time and reduce risk when addressing end-user applications. The first of these will be the InPicker 3D bin picking system. This can use a variety of 3D imaging techniques to recognise and determine the position of even complex objects with very irregular shapes and multiple structures that are randomly distributed in a container, and it can interface directly with a variety of robots.

'Stemmer Imaging's core competencies of the supply of both machine vision components and bespoke imaging subsystems for OEMs will also be complemented through providing subsystems for wider vertical markets.

'Using its technical expertise, Stemmer Imaging adds value in every aspect of its business. Vision component sales are frequently backed up with additional services, including system design.'

Matrix Vision is going to expand its cost-effective and compact board-level camera series for embedded vision in 2020.

The first product in this series - the MvBlueFox3-3M-064Z - offers an outstanding price-performance ratio with the Sony Starvis IMX178 sensor. Its excellent image quality makes the 6.4-megapixel rolling shutter sensor ideal for a wide variety of applications, for example in medicine and microscopy, while its high speed offers interesting potential for traffic and industrial applications.

In addition, the firm's upcoming MvBlueFox3-5M cameras will apply the same hardware concept for the complete range of already-integrated global shutter Pregius sensors from Sony. All camera models will be equipped with the board-to-board BF3 embedded interface, which offers a flexible concept, enabling individualised adaptations to standard USB3 connectors and cables, or direct connection to CPU or GPU embedded vision platforms.

Matrix Vision also plans to release the MvBlueCougar-X-IP67C series, an upgrade for its established MvBlueCougar-X GigE cameras, with a cost-effective IP67 protective housing for a wide range of applications in harsh environments. Waterproofed lenses will be available to eliminate the need for additional protective lens tubes.

Emberion's infrared detector technology and 2018 product introduction focused on a VIS-SWIR (400-1,800nm) linear array solution intended for spectroscopy applications. In 2020, the linear array product will be market ready, and engineering samples of a VIS-SWIR VGA imager product will be launched.

Currently, Emberion is developing VIS-SWIR sensor technology in order to offer products with an extended spectral range reaching 2,000nm. Over a longer timeframe, however, the firm aims to deliver a novel product which would be a significant step towards achieving an ultra-broadband image sensor covering wavelengths from VIS-SWIR to LWIR. For this concept, Emberion is investigating a non-cryogenically cooled MWIR sensor technology, offering performance similar to the cooled technologies of today.

CCS will be working on precision illumination, computational imaging, hyperspectral and multispectral imaging, and pattern projection.

The firm sees great potential in Industry 4.0, with its Fastus series of intelligent lighting capable of providing data on light use to inform on maintenance, performance and other issues. This enables manufacturers to optimise their operations.

With increasing demand for easy integration of machine vision components, CCS is also committed to expanding its plug-and-play lighting and open-architecture solutions, so that lights can be easily synchronised with other machine vision components for immediate operation.
COPYRIGHT 2019 Europa Science, Ltd.

Article details
Title annotation: Roadmap 2019/20
Publication: Imaging and Machine Vision Europe
Date: 10 December 2019