Self-Driving Cars: Our Future as Passengers

However long it takes to fully gain traction, the age of autonomous cars will be here before we know it. Those in the know see it as the inevitable future of transportation: cars designed as moving living rooms. Given the numerous high-profile companies developing self-driving technologies, as well as the hundreds of smaller companies and start-ups dedicating themselves to enabling this industry through connectivity, sensors, and other products (see this chart), we are well on our way to becoming full-time passengers.

And that’s not all bad. Full autonomy could create a “passenger economy,” which Intel values at $7 trillion: travel time becomes productive time, as people work instead of drive, and the industry spurs entirely new markets. Autonomous cars are also expected to be safer, with fewer accidents than human-controlled cars.

Critics say that widespread adoption of self-driving cars will cost jobs, including those of taxi, long-haul truck, and delivery drivers. That will eventually be true, but the new types of jobs created in their wake might make up for the loss. Critics also note resistance from potential buyers, who have concerns about privacy and security and hesitate to trust a new technology.

But the question remains: will most customers want a self-driving car? Or do people love driving enough that they will continue to want control of their own vehicles? Mercedes-Benz recently posted an article about how autonomous cars will kill the joy of driving, but conceded that this may be a small price to pay for safer roads. They also noted that, as your self-driving car drives itself, you might use your newfound freedom to, ironically (and somewhat hilariously), play a virtual reality car racing game inside it.

Hyperimaging: Superhero Vision

Out of five big innovations that IBM Research predicts will change our lives in the next five years, one in particular caught our eye, since it might just require some of our precision optics: hyperimaging technology.  Here is an introduction to this burgeoning optoelectronics opportunity.

“More than 99.9 percent of the electromagnetic spectrum cannot be observed by the naked eye. Over the last 100 years, scientists have built instruments that can emit and sense energy at different wavelengths.” – IBM Research

Hyperimaging technology is special because it combines multiple bands of the electromagnetic spectrum to reveal qualities beyond what is normally visible, perhaps approaching Superman-style vision.

Existing tools can illuminate objects, and even see through opaque environmental conditions, using different frequencies of the electromagnetic spectrum such as radio waves, microwaves, millimeter waves, infrared, and x-rays, reflecting them back to us. However, each of these instruments sees only across its own specific portion of the spectrum.

IBM is building a portable hyperimaging platform that “sees” across numerous portions of the electromagnetic spectrum collectively, to potentially enable a host of practical applications that are part of our everyday experiences.

How will hyperimaging affect our daily lives? In five years, it could help identify the nutritional value of food, detect fraudulent drugs, deepen augmented reality experiences, or make driving conditions clearer. For example, using millimeter wave imaging (a camera and other sensors), hyperimaging could help a car see through fog or detect hazardous, hard-to-see road conditions such as black ice. Cognitive computing technologies will then draw conclusions from the hyperimaging data, recognizing, say, a cardboard box versus an animal in the road.
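
As a rough illustration of the idea (our own toy sketch, not IBM's actual algorithm), fusing co-registered scans from different bands can be as simple as normalizing each band and averaging, so a feature that only one band picks up still appears in the composite:

```python
def fuse_bands(bands):
    """Naively fuse co-registered scans taken in different spectral
    bands: normalize each band to [0, 1], then average them, so a
    feature that shows up in any single band survives into the
    composite. (Illustrative only; not IBM's actual method.)"""
    fused = [0.0] * len(bands[0])
    for band in bands:
        lo, hi = min(band), max(band)
        span = hi - lo
        for i, value in enumerate(band):
            fused[i] += (value - lo) / span if span else 0.0
    return [total / len(bands) for total in fused]

# A hazard that is invisible in the visible band (flat readings)
# but strong in a millimeter-wave scan still appears after fusion.
visible = [5, 5, 5, 5]
mmwave = [0, 0, 80, 0]
print(fuse_bands([visible, mmwave]))  # peak at index 2
```

A real hyperimaging system would do far more (registration, per-band calibration, learned fusion), but the core benefit is the same: information from invisible bands lands in one usable picture.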

In all, it sounds like a promising and cool new technology on the horizon. Check out IBM’s other “five in five” predictions here.

Marvels of Micro Technology: Compact Camera Module Market to Reach $51B

So, what is on the cool technology docket for 2016? Some exciting new products will be coming onto the market, many of which will include connectivity and embedded cameras, ready for connecting to the internet of things (IoT). We are thrilled that cameras are becoming more prevalent, and indeed, significant growth is projected in the compact camera module (CCM) market. The demand for thinner devices and higher-quality cameras, along with the now-essential automotive camera, is driving the market.

According to a new report by Yole Développement (Lyon, France), the compact camera module market is likely to more than double by 2020, reaching $51 billion. Mobile phone cameras currently account for 73% of the market. The automotive camera segment is growing swiftly and will soon become the second largest, expanding at a CAGR of 36% to reach $7.9B by 2020.
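
As a sanity check on figures like these, the standard compound annual growth rate formula, CAGR = (end / start)^(1 / years) − 1, takes only a few lines of Python. The ~$1.7B 2015 baseline below is a hypothetical number chosen to illustrate the arithmetic, not a figure from the Yole report:

```python
def cagr(start_value: float, end_value: float, years: float) -> float:
    """Compound annual growth rate: the constant yearly rate that
    grows start_value into end_value over the given span of years."""
    return (end_value / start_value) ** (1 / years) - 1

def project(start_value: float, rate: float, years: float) -> float:
    """Project a value forward at a constant annual growth rate."""
    return start_value * (1 + rate) ** years

# Growing 36% per year for five years multiplies a market ~4.6x:
# a hypothetical ~$1.7B segment in 2015 would reach ~$7.9B by 2020.
print(round(project(1.7, 0.36, 5), 1))  # 7.9
```

Running the formula in reverse, `cagr(1.7, 7.9, 5)` lands back at roughly 36%, which is why CAGR is the usual way analysts compress a multi-year forecast into one number.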

What else is driving growth? New technology shifts. Burgeoning areas such as 3D, computational, motion, and infrared cameras, along with multi-sensor arrays and projectors, all require high-quality optics. Thanks to these shifts, the camera module will ultimately become the go-to product for multi-sensing.

Finally, in a boon to Kasalis’ area of the market, the assembly portion of the industry, Yole projected a compound annual growth rate (CAGR) of about 20% (image below). That’s great news, and we hope to embrace and evolve with these burgeoning shifts toward multi-sensing technologies throughout the next five years and beyond.

Compact Camera Module (CCM) Market 2015

The Internet of Things: Optics Opportunities

The Internet of Things (IoT) represents a vast array of opportunities for optics given the sheer number of technologies that will be connected to the internet in the future. From wearables to home monitoring systems, and from the tiniest camera modules to gesture recognition optics, the highest quality components will be in demand for groundbreaking technologies in our networked future. In burgeoning healthcare, automotive, smart home, and communication developments, exciting challenges await our optics assembly equipment and, of course, the entire manufacturing sector.

The Internet of Things (IoT) will drive future growth. (Source: Jabil)

The Internet of Things is poised to be a major driver of economic growth in the near future. Cisco predicts that by 2020 there will be 50 billion things connected to the Internet, generating revenues of more than $19 trillion. However, building the IoT up to that level will not be a simple task.

In April 2015, Jabil sponsored a Dimensional Research global survey of more than 300 supply chain professionals at companies that manufacture electronic goods. While 75 percent of those surveyed are planning, developing, or producing IoT-related products, 77 percent admit they lack the in-house expertise needed to deliver them. That reveals major knowledge gaps; once they are filled, there is great potential for new internet-enabled products and services.

Those surveyed saw value in using data from the IoT to drive product innovation.  About half of them believed that data gathered from the IoT could potentially help in: delivering new product capabilities; creating new products, services, or business models; understanding failures to improve quality; and measuring feature usage to inform user design.  It is an exciting time for the Internet of Things as we look toward the future.  We at Kasalis hope to contribute meaningfully to the digital integration of the world around us.

Virtual and Augmented Reality: a Holographic Future

Microsoft HoloLens Display (Source: Wired)

Innovation is happening across the virtual and augmented reality universe, and VC firms are investing in it significantly. Much like the “holodeck” in Star Trek (a room that can transform into any location in the universe via holographic imagery), these wearable alternate reality devices plunge users into another world. Oculus Rift is currently the leading virtual reality product, and, as recently reported in Wired, Microsoft has been developing HoloLens, an augmented reality headset that layers a multi-dimensional cyber world on top of the real one. These systems create an amazing array of opportunities to collaborate, visualize, create, experiment, and, of course, play.

HoloLens Augmented Reality Headset (Wired)

The new HoloLens’ depth camera has a field of view spanning 120 by 120 degrees, so it can sense your hands even when they are almost fully outstretched. As many as 18 sensors flood the device with data every second, all managed by an onboard CPU. Users control the device through gesture recognition, voice, and gaze. Scenes might range from a 3D video game to the landscape of Mars. In fact, the Mars hologram was so impressive that NASA signed on to use the system right away so that agency scientists can collaborate on a mission with it.
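
To get a feel for what a 120-by-120-degree field of view buys you, here is a minimal geometric sketch (our own simplification, not Microsoft's code) that tests whether a point in front of a camera falls inside a square field of view:

```python
import math

def in_fov(x: float, y: float, z: float, fov_deg: float = 120.0) -> bool:
    """Return True if a point in the camera frame (x right, y up,
    z forward, in metres) lies inside a square field of view that
    spans fov_deg both horizontally and vertically. This is a toy
    model of a depth camera, ignoring lens distortion and range."""
    if z <= 0:  # behind the camera
        return False
    half = math.radians(fov_deg / 2)
    return abs(math.atan2(x, z)) <= half and abs(math.atan2(y, z)) <= half

# A hand held well off to the side (0.5 m right, 0.4 m forward,
# about 51 degrees off-axis) stays inside a 120-degree field of
# view but would fall outside a narrower 60-degree one.
print(in_fov(0.5, 0.0, 0.4, fov_deg=120))  # True
print(in_fov(0.5, 0.0, 0.4, fov_deg=60))   # False
```

The wide field of view is what lets the headset keep tracking gestures without forcing you to hold your hands directly in front of your face.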

In addition to Oculus Rift, other virtual reality systems include the Zeiss VR One and the Samsung Gear VR. The HoloLens, still in development, is being touted as ambitious and bold: a groundbreaking augmented reality system that merges the real world with virtual surroundings. Expect a thrilling ride when it arrives, and it will be a delight to see the inventive applications developers come up with to get the most out of this technology.

Consumer Electronics Show Debuts Camera-Centric Tech

In Las Vegas this week, the International Consumer Electronics Show is taking place, introducing attendees to the most cutting-edge technologies well before they are available to the general public. This year, the most notable camera-centered products include security cameras with facial recognition and user-friendly medical cameras.

Netatmo Security Camera

Several sources, including CNET and Gizmodo, have published stories on Netatmo’s new security camera with facial recognition.  Its sophisticated software can recognize family members who are at home and provide a video feed that can be viewed remotely, should there be an intruder who is not recognized.

A startup called CellScope, exhibiting at the show, was featured on NPR; it has built a small ear probe that clips onto the top of an iPhone camera. Instead of rushing to the emergency room, you can stream footage from inside the ear to an app, so the images can be examined by a doctor.

Finally, there were two exciting partnership announcements at the show. First, Omnivision and Inuitive Ltd. have teamed up to develop a reference design for compact modules that enable 3D imaging in consumer electronics, supporting advanced features such as augmented reality, 3D scanning, and post-processing photography. Second, Jabil Circuit and Pelican Imaging announced a partnership to develop a high-resolution array camera module that uses the most advanced digital optics for accurate depth acquisition.

Compact camera modules are essential to the new technologies and consumer electronics of the future because, as these examples show, more powerful imaging from high-quality cameras enables genuinely useful new tools.

Emerging Technologies and Active Alignment

In Gartner’s 2014 Hype Cycle for Emerging Technologies, there are several exciting areas in which technologies depend on optical components and camera modules for key functions – functions that likely are dependent upon clarity of images and require active alignment for their optics.  The most prominent are gesture control, virtual reality, augmented reality, and autonomous vehicles.  Of those, the most advanced one on the cycle is gesture control technology; according to Gartner, its “plateau of productivity,” in which mainstream adoption begins to take place, will be reached in 2-5 years.

Gartner's 2014 Hype Cycle for Emerging Technologies

Gesture control technology has been embraced by companies that range from small venture-funded start-ups to large corporations looking for the next big thing.  Some companies, such as Samsung, are partnering with these start-ups to incorporate gesture recognition in their next generation models of televisions or other electronics.  Others are forging ahead with their own cutting-edge products; for example, Intel has recently publicized its wide-ranging RealSense Technology, by which a camera in the computer can see in 3D, recognize gestures, and take refocusable photos.

At Kasalis, we are fostering innovation at the intersection of software and optics, providing precision active alignment for optics that can then translate hand movements, or gestures, into commands that software can understand clearly and accurately. We are thrilled that camera module and optical quality have become a top priority for the most cutting-edge technologies, and delighted to know that our technology plays a key supporting role in their advancement.

Now Available: The Pixid 300 Series, Groundbreaking Automated Active Alignment and Test System

We are thrilled to announce that our signature alignment and test system, the Pixid 300 Series, is now available to high volume manufacturers of camera modules. Our engineers have truly created an outstanding assembly system that leapfrogs existing standards to meet the future demands of fast-moving high technology industries.

The groundbreaking Pixid 300 Pro is our signature model.

Why is the Pixid system different? For starters, its innovative design allows for parallel processing of camera modules: while one module is being aligned, another is receiving its adhesive. This contributes to our industry-leading cycle time of only 15 seconds, or 240 camera modules per hour, all using active alignment for the best possible image quality. In addition, the yield of usable camera modules is higher on our machines because our Adaptive Intelligence™ software adjusts the alignment algorithm based on data trends.
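
The throughput arithmetic is easy to verify: at one module per 15-second cycle, 3600 / 15 = 240 modules per hour. A small sketch, using illustrative stage times rather than Pixid specifications, also shows why overlapping adhesive dispense with alignment beats running them back to back:

```python
def units_per_hour(cycle_time_s: float) -> float:
    """Steady-state throughput when a finished module leaves the
    line once every cycle_time_s seconds."""
    return 3600 / cycle_time_s

def pipeline_time(n_modules: int, stage_times_s: list) -> float:
    """Total time for n modules through overlapping stages (e.g.
    adhesive dispense, then alignment): once the first module has
    filled the pipeline, one module exits per bottleneck cycle.
    The stage times are hypothetical, for illustration only."""
    bottleneck = max(stage_times_s)
    return sum(stage_times_s) + (n_modules - 1) * bottleneck

# A 15-second cycle yields 240 modules per hour,
print(units_per_hour(15))           # 240.0
# and overlapping a 10 s dispense with a 15 s alignment processes
# 10 modules in 160 s, versus 250 s run strictly back to back.
print(pipeline_time(10, [10, 15]))  # 160.0
```

This is the standard pipelining argument: throughput is set by the slowest stage, not by the sum of the stages.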

Our systems cost significantly less than the industry average, in terms of both equipment and maintenance. Our modular design makes the Pixid series easy to operate and simple to repair or update. With customer needs in mind, we developed this system to meet their challenges head-on: driving down the cost of active alignment, creating an intuitive machine that doesn’t require an engineering degree to operate, and lowering lead times through rapid system configuration.

The Pixid 300 Pro model is a fully automated manufacturing system featuring active alignment, configurable optical testing options, Adaptive Intelligence™ SPC, and automated adhesive dispense and UV cure. In addition to the Pixid 300 Pro, the line includes the Pixid 300 Test model, intended purely for high-speed final functional testing of camera modules. The Pixid 300 systems are designed for high-volume production of camera modules such as those used in smartphones, cars, webcams, medical imaging, wearable sports cameras, and security cameras.

Digital Assistant for the Blind: Camera Module Provides “Sight” in EyeRing

The EyeRing system, invented at the MIT Media Lab’s Fluid Interfaces Group, is a chunky ring with a mounted camera module that aids blind users by providing audio descriptions of what is in front of them. For example, in the EyeRing video (below), a man shopping commands the ring to detect the color of the shirt he is holding. The image is sent through the system, the result is translated into words, and the EyeRing responds, “grey.”

When the camera snaps a photo, it is sent to a Bluetooth-linked smartphone. A special Android app processes the image using computer-vision algorithms and then uses a text-to-speech module to communicate the results through earphones. So far, the device can detect currency type, color, and the amount of open space ahead (the “Virtual Walking Cane”).
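
As a toy stand-in for the color-detection step (the app's actual computer-vision algorithms are not described in detail, and this palette is hypothetical), a nearest-reference classifier over average pixel values captures the idea:

```python
def name_color(r: int, g: int, b: int) -> str:
    """Name a color by finding the closest reference in a small,
    hypothetical palette (squared RGB distance). A real system
    would average pixels over the detected object first."""
    palette = {
        "grey":  (128, 128, 128),
        "red":   (200, 40, 40),
        "green": (40, 160, 60),
        "blue":  (40, 70, 190),
        "white": (245, 245, 245),
        "black": (15, 15, 15),
    }
    def distance(ref):
        rr, gg, bb = ref
        return (r - rr) ** 2 + (g - gg) ** 2 + (b - bb) ** 2
    return min(palette, key=lambda name: distance(palette[name]))

# An average shirt pixel of (120, 125, 130) comes back as "grey",
# the kind of one-word answer the ring speaks aloud.
print(name_color(120, 125, 130))  # grey
```

The word this returns is exactly the payload the text-to-speech module needs: the heavy lifting is on the vision side, while the spoken interface stays simple.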

Communication from camera module to mobile phone

The camera module sends an image to the mobile phone app, which then translates the image into words and tells the user what it sees. Image: MIT

 

Camera detecting type of currency

The EyeRing camera will identify currency for the vision impaired. Image: MIT

 

Although commercialization is likely at least two years away, the potential for this type of technology to help the blind “see” what is in front of them is huge. The team is currently working on the next prototype, adding more advanced capabilities such as reading non-braille text, taking real-time video, and incorporating additional sensors and a microphone. The design will also be streamlined to be smaller, with a lower center of gravity. While finger-worn devices are not new, most existing ones were designed for sighted users, so this is truly an exciting breakthrough for the visually impaired.

EyeRing – A finger-worn visual assistant from Fluid Interfaces on Vimeo.