Vision Vitals

Embedded Vision: Past, Present, and Future with Suresh Madhu

September 13, 2023 · e-con Systems · Episode 1

Uncover the intricacies of embedded vision technology as we sit down with Suresh Madhu, Head of Product Marketing at e-con Systems. Suresh is a seasoned veteran with more than 16 years of experience in embedded product design and product development. Through our engaging conversation, Suresh sheds light on the evolution of this technology, from its humble beginnings with mobile phone cameras to the development of applications as sophisticated as autonomous vehicles. Discover how cutting-edge trends like edge computing and sensor fusion are shaping industries and enhancing the accuracy and sophistication of applications.

As we delve deeper, we get to explore a wide array of topical issues. Suresh shares his insights on the unique challenges industries face when implementing embedded vision solutions, from cost considerations to the complexities of integrating multiple sensors. He also walks us through the transformation of autonomous mobile robots and the future of autonomous vehicles and agriculture vehicles. Strap in for a fascinating journey as we uncover the past, present, and future of embedded vision technology with Suresh Madhu. This episode is sure to be a treasure trove of insights for tech enthusiasts and vision tech experts alike.

Transcript

Suganthi Sugumaran:

Welcome to Vision Vitals, a series of embedded vision podcasts powered by e-con Systems. This is Suganthi, your host for today. I'm the Director of Marketing here. We're happy to explore the evolution and latest trends in embedded vision technology through this series. In this first episode, I'll be talking to Suresh Madhu, Head of Product Marketing at e-con Systems. With 16-plus years of experience in embedded product design, system-on-modules, camera solutions, and product development, he has played an integral part in helping many customers build their products by integrating the right vision technology into them. We will be discussing how embedded vision technology has evolved over the years, as we get to hear his valuable insights on a broad range of topics. So let's get started. Hi Madhu, welcome to our podcast.

Suresh Madhu:

Hi Suganthi, thank you for having me on this show.

Suganthi Sugumaran:

Sure, Madhu, let's kick-start things by looking at the past. How do you think embedded vision technology has evolved over the years?

Suresh Madhu:

Fundamentally, embedded vision technology can be traced back to the 1970s, but its growth as a cost-efficient imaging solution was revealed with the advent of mobile phone cameras. Mobile phone cameras played a crucial role in unlocking the potential of cost-efficient embedded vision solutions. With billions of smartphones in use globally, their ubiquity allowed for the widespread adoption of embedded vision technology. Also, in the early 2000s, the development of digital signal processors and field-programmable gate arrays began, enabling sophisticated image processing algorithms, and this triggered the development of early embedded vision applications in areas such as surveillance and automotive safety. The mid-2000s saw the emergence of low-cost, low-power processors such as ARM-based CPUs and GPUs. This made it possible to implement more complex algorithms on embedded systems, causing embedded vision applications to completely transform areas such as robotics and consumer electronics. In recent years, the development of deep-learning algorithms and the availability of large data sets have enabled significant advances. For example, they have helped cameras recognize and classify objects with high accuracy, even in complex environments. As a result, sophisticated applications such as autonomous vehicles have taken several industries by storm.

Suresh Madhu:

Furthermore, the evolution of camera interfaces has witnessed a significant progression. In the analog era, cameras relied on analog interfaces, transmitting signals as continuous waveforms. With the advent of digital cameras, parallel interfaces emerged, allowing for faster data transfer and improved image quality. MIPI standards gained prominence as technology advanced, enabling high-speed, low-power communication. In recent years, the emergence of long-range, high-bandwidth interfaces such as GMSL and FPD-Link has transformed camera interfaces in automotive and industrial applications. These interfaces leverage high-speed serial communication to transmit uncompressed video and data over long distances for robust performance in challenging environments. Overall, embedded vision technology has come a long way, and its evolution is set to continue.

Suganthi Sugumaran:

Indeed, it's a long journey, Madhu. What about the latest trends we are seeing in the industry?

Suresh Madhu:

First, we have edge computing, which is quickly becoming a significant trend in embedded vision technology. It involves processing data at the edge itself, which reduces latency and improves efficiency. Edge computing also enhances security by reducing the need for data to be sent to central servers or the cloud. Also, the continuous optimization of deep learning algorithms enables systems to recognize patterns and make decisions based on complex data on the edge itself. Thanks to edge computing's real-time processing capabilities and advances in hardware and algorithms, large amounts of data can be processed on edge systems, powering applications such as autonomous vehicles, robotics, and autonomous shopping. On this note, it's crucial to reflect on how, just five years back, cloud computing was at its peak. It offered advantages such as scalability, accessibility, and cost efficiency, making it a preferred choice for many. However, with the evolution of processors, edge computing became inevitable. As processors became more advanced, the concept of edge computing gained prominence.
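To make the edge-computing idea concrete, here is a minimal sketch of an on-device inference loop, assuming Python with OpenCV and the TensorFlow Lite runtime; the model file name and its uint8 input format are hypothetical placeholders. The point is that raw frames never leave the device; only compact results would be sent upstream.

```python
# Minimal edge-inference sketch: all processing stays on the device.
# Assumes OpenCV and tflite-runtime are installed, and a uint8-quantized
# detection model (the path below is a placeholder) converted to TFLite.
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="detector.tflite")  # hypothetical model
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
h, w = inp["shape"][1], inp["shape"][2]

cap = cv2.VideoCapture(0)  # embedded camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and batch the frame to match the model's expected input.
    tensor = cv2.resize(frame, (w, h))[np.newaxis, ...].astype(np.uint8)
    interpreter.set_tensor(inp["index"], tensor)
    interpreter.invoke()  # inference runs on the edge device itself
    scores = interpreter.get_tensor(out["index"])
    # Only this compact result, not the raw frame, would leave the device.
    print("top score:", float(scores.max()))
```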

Suresh Madhu:

This approach has been majorly helpful in reducing latency and addressing bandwidth limitations. Then there is sensor fusion. It refers to integrating different types of sensors to provide a comprehensive view of the environment. This approach can improve accuracy and enable very sophisticated applications. For example, in delivery robots, you need 3D LiDAR for mapping and navigation, a 2D HDR camera for surround view, and IMU sensors to detect ups and downs, to make sure that delivery robots navigate seamlessly. The same applies to autonomous vehicles, trucks, and agriculture vehicles.
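Sensor fusion can be illustrated with a toy sketch in plain Python; the readings and thresholds below are made-up stand-ins, not real driver APIs. Each modality answers a different question, and a small fusion step combines them into one navigation decision.

```python
# Toy sensor-fusion sketch for a delivery robot: each sensor answers a
# different question; the fuse() step combines them. All values are
# simplified stand-ins for real driver output.
from dataclasses import dataclass

@dataclass
class LidarReading:          # 3D LiDAR: geometry for mapping/navigation
    min_obstacle_dist_m: float

@dataclass
class CameraReading:         # 2D HDR camera: what the obstacle is
    detected_class: str
    confidence: float

@dataclass
class ImuReading:            # IMU: ups and downs (ramps, curbs)
    pitch_deg: float

def fuse(lidar: LidarReading, cam: CameraReading, imu: ImuReading) -> str:
    """Combine all three modalities into one navigation decision."""
    if lidar.min_obstacle_dist_m < 0.5:
        # LiDAR says something is close; the camera tells us what it is.
        if cam.detected_class == "person" and cam.confidence > 0.6:
            return "stop"        # people get the most conservative action
        return "replan_path"
    if abs(imu.pitch_deg) > 15:
        return "slow_down"       # steep ramp or curb detected by the IMU
    return "proceed"

print(fuse(LidarReading(0.4), CameraReading("person", 0.9), ImuReading(2.0)))
```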

Suganthi Sugumaran:

Wow, that sounds interesting. Now let's move on to the next question. I am sure, Madhu, industries are facing massive challenges while implementing embedded vision solutions. You are talking to many customers and handling many projects. Can you elaborate on the challenges they are facing?

Suresh Madhu:

Embedded vision solutions have become a game changer in many sectors, from AMRs and smart farming to life sciences and transportation.

Suresh Madhu:

After all, they enable machines to perceive and understand the world around them. However, with these benefits come several challenges that such industries must overcome to leverage the full potential of these solutions. Cost is the first challenge that several industries face, especially for high-volume applications. The logic is that when volume increases, products should become cheaper. Then there is the huge challenge of using multiple sensors like 3D sensors, 2D cameras, IMU sensors, etc., which must be not only integrated but also synchronized to improve accuracy. Also, processing visual data in real time with minimal latency and delay can be a real challenge. To address this, specialized hardware and software are optimized for real-time processing, with algorithms and models tuned to balance accuracy and speed. Choosing the right processing platform is a challenge as well. Often, customers get confused by the different options available in the market, so it is important to consider factors such as processing power, power consumption, form factor, etc. That being said, they would need an expert to choose the right one.
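The synchronization challenge Suresh mentions can be sketched in software. Hardware triggering is the precise approach; the snippet below, in plain Python with made-up timestamps, shows the common software fallback of pairing each camera frame with the nearest LiDAR sample and flagging pairs that drift beyond a tolerance.

```python
# Software-side synchronization sketch: pair each camera frame with the
# LiDAR sample closest to it in time. Hardware triggering is more exact;
# nearest-timestamp matching is a common software fallback.
import bisect

def nearest_sample(timestamps, target, tolerance_s=0.010):
    """Index of the sample closest to `target`, or None if the best match
    is farther away than `tolerance_s` (i.e., sensors are out of sync)."""
    i = bisect.bisect_left(timestamps, target)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(timestamps)]
    if not candidates:
        return None
    best = min(candidates, key=lambda j: abs(timestamps[j] - target))
    return best if abs(timestamps[best] - target) <= tolerance_s else None

camera_ts = [0.000, 0.033, 0.066, 0.100]   # 30 fps frames (seconds)
lidar_ts  = [0.002, 0.052, 0.101]          # 20 Hz scans, slightly offset

for t in camera_ts:
    j = nearest_sample(lidar_ts, t)
    print(f"frame at {t:.3f}s -> lidar scan",
          "none (out of sync)" if j is None else f"{lidar_ts[j]:.3f}s")
```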

Suganthi Sugumaran:

True, Madhu, guidance from an expert definitely helps. Having said that, can you tell us specifically about the challenges faced by autonomous mobile robots?

Suresh Madhu:

Embedded vision in autonomous mobile robots comes with its own set of challenges. For starters, many in the industry still rely on 3D LiDAR and SLAM-based mapping for localization and navigation. Even though these LiDAR-based AMRs have precise navigation, they're not budget friendly, especially regarding scalability. To meet the surge in mass production of AMRs, the cost of the AMRs must come down, which is not feasible if we continue using LiDAR-based AMRs. So it's vital to consider either low-cost 3D solutions or the best possible stereo solutions based on 2D cameras. But stereo or low-cost 3D solutions, such as time-of-flight solutions, are not foolproof either, so we need to engineer them to give accurate data in different environmental conditions, like low light, sunlight reflections, and more. So developing low-cost 3D solutions to replace 3D LiDAR systems is going to be the key to transforming the future of AMRs.
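As a rough illustration of the stereo alternative Suresh describes, here is a minimal depth-from-disparity sketch using OpenCV's block matcher; the image file names, focal length, and baseline are placeholders, and a real system would first calibrate and rectify the cameras.

```python
# Minimal stereo-depth sketch with OpenCV's block matcher: the kind of
# low-cost 2D-camera alternative to 3D LiDAR discussed above. Assumes the
# left/right images are already rectified; file names are placeholders.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16 and blockSize odd; both are
# tuning knobs that trade accuracy against speed.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point -> pixels

# Depth from disparity: Z = f * B / d, where f is the focal length in
# pixels and B the camera baseline in meters (placeholder values below).
f_px, baseline_m = 700.0, 0.06
depth_m = (f_px * baseline_m) / disparity[disparity > 0]
print("nearest surface at roughly %.2f m" % depth_m.min())
```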

Suganthi Sugumaran:

Yes, Madhu. Some time back I heard that sidewalk delivery robots have been banned in California. So what exactly does the future of AMRs hold? Where is the industry heading, in your opinion?

Suresh Madhu:

This is an exciting time, Suganthi, as it is difficult to accurately predict which technologies will pick up momentum.

Suresh Madhu:

After all, a few years ago, many would have assumed autonomous vehicles would be widely adopted, but that certainly hasn't been the case yet. There have been many challenges, from safety concerns and regulatory frameworks to cost. Companies are also finding it difficult to build robust and reliable communication systems to facilitate vehicle-to-vehicle and vehicle-to-infrastructure interactions. However, what seems to have already picked up momentum is remote driving. Remote driving, where a human operator controls a vehicle from a remote location, has gained more popularity compared to fully autonomous mobile robots for a few reasons. First, remote driving allows for real-time decision-making and adaptability in complex and dynamic environments where AMRs may face limitations in perception and navigation. Second, it offers an intuitive interface for operators, leveraging human expertise and judgment in situations that require nuanced decision-making. Third, the existing infrastructure, like telecommunication networks in general, can support remote driving applications more readily than fully autonomous systems.

Suganthi Sugumaran:

Got it, Madhu. Can you also give us a picture of the challenges faced in agriculture vehicles and autonomous vehicles?

Suresh Madhu:

From an embedded vision perspective, one of the biggest challenges is that a good camera that brings out the best HDR possible is still not available in the market. The tough operating conditions across the diverse environments encountered by agriculture vehicles and AVs present technical hurdles that need to be overcome for optimal HDR performance. For both agriculture vehicles and autonomous vehicles, integrating sensor fusion is still a challenge due to dynamic environments such as diverse and unpredictable terrains, including muddy fields, slopes, and uneven surfaces. Then there is the transition for these applications to adopt automotive standards, even though they don't belong to the automobile industry. Finally, there's robustness against thermal stress, vibrations, and shocks, which is currently a challenge, as there are not many cameras built to withstand these conditions while performing reliably and with longevity.

Suganthi Sugumaran:

Madhu, that is indeed a difficult one to adopt. Moving on from industrial, retail is one sector that is seeing a lot of vision-based innovations like smart carts, self-checkouts, etc. Can you talk briefly about these systems?

Suresh Madhu:

Smart carts are equipped with the right set of features to automatically detect and identify objects for a seamless checkout process. The biggest challenge with these smart carts is identifying the same product in different sizes. For instance, a 1-liter and a 2-liter Pepsi bottle are identified as the same product by the deep learning algorithms, since the size of the bottle can't be determined by 2D cameras. That's where 3D cameras come in, to identify the same product in different sizes. But the reason why 3D cameras are not widely used is that including these technologies is likely to add to the overall cost and complexity of the product. The retail industry is not going to scale and sustain with such an expensive solution. Other challenges include ensuring real-time object detection, which requires developing algorithms capable of handling varying lighting conditions and environments.
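The size-ambiguity problem has a simple geometric core, sketched below in plain Python with illustrative placeholder numbers: from a 2D image alone, a bottle's pixel height depends on its distance, but once depth is known, the pinhole model recovers physical height, which is what separates a 1-liter bottle from a 2-liter one.

```python
# Why depth resolves the 1L-vs-2L problem: with only a 2D image, apparent
# height in pixels depends on distance, so two bottle sizes can look alike.
# Given depth, pixel height converts to physical height via the pinhole
# model: real_height = pixel_height * depth / focal_length_px.
# All numbers below are illustrative placeholders.

def physical_height_m(pixel_height: float, depth_m: float, focal_px: float) -> float:
    return pixel_height * depth_m / focal_px

FOCAL_PX = 800.0
# Same label from the 2D classifier, different geometry from depth:
near_small = physical_height_m(pixel_height=280, depth_m=0.80, focal_px=FOCAL_PX)
far_large  = physical_height_m(pixel_height=280, depth_m=0.95, focal_px=FOCAL_PX)
print(f"bottle A ~{near_small:.2f} m tall, bottle B ~{far_large:.2f} m tall")
# ~0.28 m vs ~0.33 m: roughly the 1-liter vs 2-liter height difference a
# 2D camera alone cannot see when both bottles subtend the same pixels.
```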

Suganthi Sugumaran:

Madhu, I think power efficiency is also a big problem. Enabling extended operations without frequent recharging is definitely a challenge.

Suresh Madhu:

Yes, Suganthi. Added to that, maintaining data privacy and security is essential when dealing with visual information, as these systems may capture sensitive data. Using secure and privacy-preserving mechanisms adds complexity to building embedded vision systems. To sum up, these are the ongoing transformations around the industry. As AI and ML algorithms evolve, the future of embedded vision technology holds immense promise. Industries are already working towards overcoming these challenges related to cost, integration, and real-time processing.

Suganthi Sugumaran:

That's great. It has been such a pleasure talking to you. Thank you, Madhu, for your valuable insights. Listeners, I believe you have also benefited from this conversation. We would love to hear your feedback. Also, if you have any specific topic that you would want us to discuss, reach out to us at camerasolutions@e-consystems.com. Have a good day, folks. Thank you. Bye.