Friday, January 29, 2016

Sony Earnings Call Transcript

SeekingAlpha publishes the Sony quarterly earnings call transcript. A few quotes on the status of the company's image sensor and camera module business:

"We believe that demand for our devices could decelerate in the near term. In our image sensor business, we have changed our forecast to assume a slowdown in the growth of the market for smartphones. In fact, there's a risk that the market for high-end smartphones might decrease due to the issue in emerging markets I just mentioned.

In light of this situation, the management team of Sony is taking quick action. We have instructed our sales team to more aggressively approach smartphone makers, particularly those we had to turn away last year when we were supply-constrained. We have made the decision to postpone our plan to reach production capacity of 87,000 wafers per month by the end of September 2016.

Finally, we are seriously considering utilizing a portion of the facility in Oita we bought from Toshiba to manufacture logic instead of photodiodes, which could lead to a reduction in the cost of our sensors. Although we are taking these actions to mitigate the downside risks in this business, we are confident in the long-term prospects of image sensors, because we think there is room to expand their use in [indiscernible] cameras, automobiles and the Internet of Things.

We also believe that one of our competitive advantages in the image sensor space comes from the fact that we manufacture the sensors in-house. Thus, we believe that the investments we have made in production capacity for sensors will be variable going forward.
"

"The possibility that we might have to impair assets in our camera module business comes from problems we had when we're starting up this business. And the decrease in projected future demand from high-end smartphone makers."

"Last fiscal year, we received larger orders than expected in image sensor business. This cause our ability to supply the market to be constrained, a situation that continued through the beginning of current fiscal year.

Then, in the summer of 2015, we had an issue with our production equipment, which resulted in our having to decline orders from the 13 customers. After we resolved this production issue and after new capacity had just come online, orders from our customers started to decline due to softer end-user demand for smartphones.

Further complicating matters was the fact that we supply custom-designed sensors to some of our major customers, and there is an approximately five-month lead time to manufacture our sensors. As a result, it is difficult to switch our production lines over to other customers quickly. We believe that the image sensor business will start to recover from the first quarter of next fiscal year, but we will formulate our business plan based on the assumption that growth of the smartphone market will slow.
"

Quotes from the Q&A session:

"Well, we expect to make some recovery in first quarter in fiscal year 2016 because we already have orders for that. And as I said, we are expecting now almost no growth in the smartphone market. So, I think we are currently conservative about the prospect of the fiscal year 2016.

However, currently, we are aggressively promoting the image sensor business, particularly to the Chinese players we lost when our production was constrained. So, at this moment, we are in the middle of the budget process, so I cannot specify the level of utilization at this point in time.
"

"Well, in capacity, we have a plan to increase our capacity to 87,000 by September this year and approximately 20% of that capacity increase here so-called half portion. So, currently we are evaluating the current production and I think the production adjustment would be quite concentrated in that 20% portion. However, we haven't decided whether we will reduce or we will change the current planned expansion [indiscernible] at this point in time."

"Well, for next year, our so-called dual lens – dual camera platform will be launched by, we believe, from major smartphone players. However, as I said previously, recently, our smartphone market is growing and particularly, our high-end smartphone market is now slowing down. So, that may impact the demand or production schedule of dual camera smartphones by the major smartphone manufacturers. So, we believe the real start, the takeoff of smartphone with dual lens camera will be in the year of 2017."

"And as for the modules, as you may know, we are a newcomer in this business. And two years ago, we started this business. However, at the beginning, we had failed to supply the initial product. And after that, we are gradually improving our production itself. And now we are keeping very high yield level. At this moment, the forecast for the smartphone business itself is now declining. That's why we explained about the possibility of the impairment of the module.

And as for the size or timing of the impairment of the module business, at this point in time, we cannot comment on that. But as Takeda-san said, approximately a little bit less than 15% of the Device segment assets are in modules.
"

Sony Reports "Significant" Decrease in Image Sensor Sales

Sony reports its earnings for the fiscal quarter ended on Dec. 31, 2015. For the Device segment, "Sales decreased 12.6% year-on-year (a 16% decrease on a constant currency basis) to 249.9 billion yen (2,082 million U.S. dollars). This decrease was primarily due to a significant decrease in sales of image sensors, reflecting a decrease in demand for mobile products, and a significant decrease in battery business sales. This sales decrease was partially offset by an increase in sales of camera modules which were lower than originally forecasted and the impact of foreign exchange rates. Sales to external customers decreased 7.5% year-on-year.

Operating loss of 11.7 billion yen (97 million U.S. dollars) was recorded, compared to an operating income of 53.8 billion yen in the same quarter of the previous fiscal year. This significant deterioration was primarily due to the deterioration in the operating results of the battery business, including the recording of a 30.6 billion yen (255 million U.S. dollars) impairment charge related to long-lived assets, increases in depreciation and amortization expenses as well as in research and development expenses for image sensors and camera modules, and the impact of the decrease in sales of image sensors.
"

Sony also revises its future sales forecast:

"Sales are expected to be lower than the October forecast primarily due to significantly lower than expected sales of image sensors and camera modules, reflecting a decrease in demand for mobile products and lower than expected sales in the battery business. The forecast for operating income is expected to be significantly lower than the October forecast primarily due to the impact of the above-mentioned decrease in sales and the recording of an impairment charge related to long-lived assets in the battery business during the current quarter.

Sony is currently formulating its business plan for all of its business segments for the fiscal year ending March 31, 2017. With regard to the camera module business, there is a possibility that factors such as a decrease in projected future demand, which caused a downward revision in the forecast for the current fiscal year for the business, could continue to have a negative impact on the business going forward. It is therefore possible that the above-described business environment might result in an impairment charge against long-lived assets in the camera module business.
"


Update: A few more slides from the Sony earnings webcast:


The webcast gives quite a lot of detail and explanation on the image sensor and camera module business, from time 6:49 to 12:26. In Q&A #1, Sony says that it expects the business recovery to start in Q1 of the next fiscal year, beginning in April 2016.

EETimes on Stacking and FD-SOI in Image Sensors

EETimes publishes its analysis of stacking and FD-SOI trends in image sensors, with input from Yole Développement. A few quotes:

"Yole estimated that in 2015, 27% of CIS revenues were generated from stacked chips, which the firm described as “roughly the market share of Sony.”

It is the opinion of [Pierre Cambou, activity leader, Imaging & Sensors at Yole Développement] that “up to now, only Sony is mastering the [chip stacking] technique.” Although Samsung and Omnivision publicized stacked chip releases, they have not been able to scale up, Cambou observed.

One Japanese industry source, who spoke on the condition of anonymity, told EE Times that he suspects Sony is likely well advanced in its FD-SOI project for CMOS image sensors.

Yole’s Cambou believes that FD-SOI could open a host of new possibilities for next-generation CIS. The challenge of FDSOI is the added cost (per mm2) that puts even more strain on the yield issue, said Cambou. At the same time, it is “probably a good opportunity for Sony to deepen the gap with its competitors.


Two of Yole's slides from the article:

Thursday, January 28, 2016

Samsung and OmniVision Stacked Sensors Reverse Engineering

Chipworks publishes analysis reports on the Samsung S5K3P3SX and OmniVision OV23850 first-generation stacked sensors. A few slides from the Chipworks presentation:

Hua Capital, CITIC Capital and Goldstone Investment Complete Acquisition of OmniVision

PRNewswire: OmniVision and a consortium composed of Hua Capital, CITIC Capital, and Goldstone Investment announce completion of the previously announced acquisition of OmniVision. Trading of OmniVision common stock on NASDAQ has been halted before the opening of the market today and will be suspended effective as of the close of business today. OmniVision stockholders will receive $29.75 per share in cash, or a total of approximately $1.9 billion.

Movidius Signs "Substantial" Sales Deal with Google

Independent: Movidius signs a "substantial" sales deal with Google. The deal is said to be the first in an expected series of announcements by Movidius with big international companies as it seeks to establish a global position at the top of the emerging IoT market.

Google intends to use the Movidius vision processor to build as-yet unnamed products that can handle large amounts of computer processing by themselves, without having to beam the raw data back to servers or data centres for interpretation.

"Think of a security camera," Sean Mitchell, co-founder and CEO of Movidius. "Using our chip, it can understand what it's seeing or hearing without being told by a more powerful machine in a data centre. In this way it can act with more autonomy and in a more unsupervised way."

"What Google has been able to achieve with neural networks is providing us with the building blocks for machine intelligence, laying the groundwork for the next decade of how technology will enhance the way people interact with the world," said Blaise Agϋera y Arcas, head of Google's machine intelligence group in Seattle. "By working with Movidius, we're able to expand this technology beyond the data center and out into the real world, giving people the benefits of machine intelligence on their personal devices."

Update: Movidius publishes a Youtube video with explanations:

Wednesday, January 27, 2016

ST Reports SPAD Business Success

The STMicro Q4 2015 earnings call gives a few bits of info on its SPAD business: "in our Imaging business, we started to demonstrate success with our refocused strategy of specialized image and photonic sensors. In fact, our FlightSense technology was integrated in over 20 phones during 2015 and we passed the milestone of 50 million units shipped."

Self-Driving Car Forecast

McKinsey & Company offers both more optimistic and less optimistic forecasts of self-driving car adoption, the next big market for image sensors:

Videantis Partners with Almalence

Videantis licenses Almalence image enhancement algorithms that increase resolution, improve low light sensitivity, and expand DR. Videantis offers low-power vision/video processor IP suited to run such algorithms more efficiently in silicon, said to achieve up to a 100x performance increase and a 1000x power reduction compared to CPUs and GPUs. Besides targeting mobile phones, the cooperation aims to bring Almalence's SuperSensor technology to the automotive semiconductor market.

Hans-Joachim Stolberg, videantis CEO, said, “The next big increase in image quality will not come from new lenses or sensors, but from computational photography algorithms such as Almalence’s. Our customers are already experiencing the “wow” effect of the Almalence algorithms and we’re excited to bring their revolutionary software to the videantis processor architecture, enabling the low power and high performance levels that are needed to bring this technology to embedded camera devices.

As we started porting our algorithms to the videantis processor, we were impressed with the low power and high performance levels achievable. We can now run our most complex algorithms in higher resolutions and frame rates,” stated Eugene Panich, Almalence CEO. “It’s clear to us why several semiconductor companies have already adopted the videantis processor architecture for their visual processing needs.

Tuesday, January 26, 2016

Image Processors at ISSCC 2016

EETimes publishes a nice summary of image and vision processor papers at the upcoming ISSCC:

"Image processing in fact is one of the most popular ISSCC topics, appearing in the session on “Digital Processors” and again in the session on “Next-Generation Processing.” While dramatic new application areas for image processors include gesture-recognition and augmented reality, automotive driving assistance systems (ADAS) are among the most popular. With research and development on autonomous vehicles increasing, the need for faster detection of obstacles in a vehicle’s path becomes acute. Head mounted displays with augmented reality (HMD/AR) systems are intended to calculate the scenario for what’s going to happen on the roadway ahead of a speeding car. Processors in the imaging sessions will describe the impact of deep learning algorithms (like convolutional neural networks, CNN or K-nearest-neighbors, KNN). These processers support a range of machine learning applications, including computer vision, object detection (apart from what’s on the roadway), and handwriting recognition."

"One paper from KAIST (in the “Next-Generation” session) will present a low-power natural user interface processor with an embedded deep learning engine. The device is fabricated in 65nm CMOS. It claims a higher recognition rate over the best-in-class pattern recognition. Another paper from KAIST presents a dedicated high-performance advanced driver assistance SoC, capable of identifying potentially “risky objects” in automotive systems. This chip is also implemented in 65nm CMOS, and, Kaist claims, was successfully tested in an autonomous vehicle."

"In the “Digital Processors” session, Renesas will present a 12-channel video-processing chip for ADAS — implemented in 16nm FinFET CMOS."

Image Sensor Courses in Europe

CEI-Europe announces four image sensor courses to take place in Barcelona and Amsterdam in Q2 2016. The course leader is Albert Theuwissen.

Introductory level courses, aimed at engineers, managers and product developers new to the field, are:
And for those with a good understanding of electronics, physics and mathematics, advanced courses include

IS Auto vs AutoSens

Robert Stead sent me an email clarifying the difference between IS Auto and AutoSens conferences:

"It has come to my attention recently that the position regarding IS Auto and AutoSens is not 100% clear, and there is some confusion as a result of both events being in the marketplace and who is organising each. As such I wanted to write to you personally to clarify the situation and ensure you have all the information so you can decide which event you attend in 2016.

In 2014, I was working with Smithers and set up the IS Auto conference along with conference chairman Patrick Denny, and several other advisors including Sven Fleck, Alain Dunoyer, Martin Edney, Martin Punke and Mike Brading. We ran two very successful events in 2014 and 2015.

In July 2015 I decided to leave the Smithers company and set up my own business (Sense Media) in order to focus on events in digital imaging and other sensor fields. The first of these events is AutoSens, which will retain focus on automotive imaging as a core topic, but which will broaden the scope slightly to include RADAR and LiDAR sensors, as well as more in depth discussion of image processing, standards, system architecture, computer vision and more.

My company will focus on growing this event and community year-round, and broaden the coverage for automotive imaging with increased engagement with the academic community, more international attendance and several new initiatives designed to benefit the engineering community.

All of the advisors who have worked with me previously have decided to join the AutoSens board and prioritise this event. I am delighted and humbled to have their support, and we all believe that AutoSens is the best way to support the automotive imaging community. In Patrick Denny's words, “I believe what Rob is doing with AutoSens is in the best interests of my customers, my suppliers, and my competitors and I give my full backing to this project”.

Other figures in the digital imaging world who have backed AutoSens include Albert Theuwissen, who will deliver an expert workshop at the AutoSens conference.

This message is just intended to present you with the facts, and clarify who is behind the AutoSens event. The AutoSens team and Advisory Board are highly motivated to deliver an event that is of high technical quality and helps engineers throughout the supply chain who are working to develop and improve automotive vision systems. We hope you will join us on this journey.

A full speaker announcement will come next week including confirmed speakers from both established and new entrant OEMs, as well as news about other exciting developments.
"

Sony to Acquire LTE Modem Maker

Sony has reached an agreement with Israeli LTE modem maker Altair Semiconductor to acquire the company for $212M.

"By combining Sony's sensing technologies - such as GNSS (Global Navigation Satellite System) and image sensors - with Altair's high-performance, low power consumption and cost-competitive modem chip technology, and by further evolving both, Sony will strive to develop a new breed of cellular-connected, sensing component devices."

"With the acquisition of Altair, Sony aims to not only expand Altair's existing business, but also to move forward with research on and development of new sensing technologies."

"With the markets for wearable and IoT devices expected to continue to expand, Sony aims to deliver component devices that feature both sensing and communication capabilities."

Sony expects to complete the acquisition in early February, 2016.

Monday, January 25, 2016

Heptagon Announces Multipoint ToF Rangers

BusinessWire: Heptagon introduces the SHILAH and TRINITY ToF 3DRanger sensors. Each sensor can be dynamically configured for a single point or a range of independent measurement points. SHILAH provides 12 measurement points within a 70-degree field of view, and TRINITY provides 9 measurement points within a 30-degree field of view. Both are complete modules with an integrated microprocessor, algorithms, optics, a ToF sensor and a light source. Heptagon's patented “phase modulation” ToF pixel measures distances up to 3 meters in normal lighting conditions, depending on configuration.

The new multipoint rangers enable applications like scene analysis, edge extrapolation or advanced object detection. In smartphones, for example, these 3DRangers will allow fast autofocus lock for primary and front-facing cameras.
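As a rough illustration of how a phase-modulation (indirect) ToF pixel converts a measured phase shift into distance, here is a minimal sketch; the 20MHz modulation frequency is an assumed example value, not a Heptagon specification.

    # Illustrative phase-modulation (indirect) ToF range calculation.
    # The 20 MHz modulation frequency is an assumed example, not a Heptagon spec.
    import math

    C = 299_792_458.0      # speed of light, m/s
    F_MOD = 20e6           # assumed modulation frequency, Hz

    def distance_from_phase(phase_rad: float) -> float:
        """Convert a measured phase shift (radians) into distance (meters)."""
        return C * phase_rad / (4 * math.pi * F_MOD)

    print(C / (2 * F_MOD))               # unambiguous range: ~7.5 m at 20 MHz
    print(distance_from_phase(math.pi))  # half the unambiguous range: ~3.75 m

Higher modulation frequencies improve depth resolution at the cost of a shorter unambiguous range.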

ToF Imaging Tutorial

The Austrian JOANNEUM RESEARCH Forschungsgesellschaft mbH publishes the ESSCIRC 2015 tutorial on ToF imaging by David Stoppa, FBK, delivered on Sept. 18, 2015. A few slides:

Sunday, January 24, 2016

Sony CIS Roadmap

Sony publishes an overview of its CPSE 2015 exhibition booth with info on its new sensors, some to be announced in 2016:

Friday, January 22, 2016

Lindsay Grant Joins Omnivision

As Lindsay Grant shows on his LinkedIn page, he has left ST Imaging after 16 years, and joined Omnivision in Santa Clara, CA as VP of Process Engineering.

Sony to Use FD-SOI in its Stacked Sensors

EETimes reports from FD-SOI Forum held on Jan 21 in Tokyo, Japan: "the biggest FD-SOI news, which surfaced as chatter and whispering during coffee breaks at the Forum (rather than on the formal agenda), is that Sony is looking to use FD-SOI for the image signal processor (ISP) on stacked CMOS Image Sensors (CIS).

Although this buzz was also confirmed outside the Forum, neither Globalfoundries nor Sony is talking.

Three industry sources, however, independently told EE Times that chip stack CIS will open FD-SOI’s much needed, genuine volume market. Sony, today, is the world’s largest CIS supplier.

Word on the street is that Sony will be working with Globalfoundries on chip stack CIS, instead of Samsung. The Japanese consumer electronics giant wants to avoid any potential conflict with Samsung (who is also in the CIS business).
"

Ray Fontaine, Senior Technology Analyst at Chipworks, said that Sony is using its 65nm process for some of its stacked chip ISPs and TSMC 40nm for the ISPs of others (including recent iPhone stacked chip CIS).

Pierre Cambou, activity leader, Imaging & Sensors at Yole Développement, said that using FD-SOI for ISPs “would be a very interesting technical option to minimize the heat generated by the ‘ISP’ secondary chip.

Presently, Sony uses a Globalfoundries FD-SOI process for its 10mW low-power GPS chip, with the main end application being Casio GPS watches. However, its next generation has been reported to use ST's 28nm FD-SOI process.

Update: A Samsung FD-SOI slide suggests that there is activity on integrating it with CIS:

Thursday, January 21, 2016

Dynamax CEO Murder Trial

Rochester Democrat & Chronicle publishes a report from the Dynamax Imaging CEO Jim Tan murder trial. "Tan's son, Charlie, was charged with second-degree murder in the death. The trial ended in a mistrial as jurors did not reach a unanimous verdict and County Court Judge James Piampiano later made the startling ruling to dismiss the charges against the 20-year-old Cornell University sophomore altogether."

"Coworkers — who later either couldn’t be reached or declined to discuss Jim Tan — testified that the 49-year-old could be tyrannical and brow-beating.

Michael Sullivan, a senior production manager at Dynamax who’d worked with Jim Tan even before Tan founded Dynamax in 2003, said Jim Tan was a bully who tried to intimidate him and would “tell people different stories to pit them against each other” and would gather workers together to belittle them in front of their coworkers.

Meghan Johnson, a 27-year-old production technician at Dynamax and likely the last person to communicate with Tan through email, said she largely avoided Tan’s tantrums, but that she had seen them — seen him yell, belittle people and throw things around the office.
"

Thanks to MJ for the link!

Wednesday, January 20, 2016

8th Fraunhofer IMS Workshop on CMOS Imaging - Seeing the Future

The 8th Fraunhofer IMS CMOS Imaging Workshop in Duisburg, Germany will take place on May 9-10, 2016. The workshop agenda includes a lot of interesting stuff:

Advances in CMOS Imaging
  • CMOS Image Sensors: Masterpieces of 3D-Integration
    Albert Theuwissen, Harvest Imaging
  • Improving Customization by Image Simulations
    Karsten Sengebusch, Eureca
  • Switching from Sensing for Imaging to Imaging for Sensing
    Pierre Cambou, Yole
Technology and Testing
  • Foundry Services for CIS
    Gerhard Spitzlsperger, LFoundry
  • Test Solutions for High End Imagers
    Marcus Verhoeven, Aspect Systems
Single-Photon Sensing
  • SPADs in CMOS Technology
    Alexander Schwinger, Fraunhofer IMS
  • SPAD Sensors for 3D Range Finding
    Carl Jackson, SensL
  • Time-Correlated Single Photon Counting with SPAD Arrays
    Simone Tisa, MPD
3D-Imaging
  • Miniaturization of 3D sensors
    Markus Rossi, Heptagon
  • Practical Depth Imaging
    Giora Yahav, Microsoft
Automotive Sensors
  • Driving Vision Applications in ADAS
    Heinrich Gotzig, Valeo
  • Optical Sensors for ADAS
    Thomas Fechner, Continental
Advanced Applications
  • Embedded Cameras for Drones
    Benoit Pochon, Parrot
  • Image Sensors for Space Applications
    Bart Dierickx, Caeleste
  • Trends in the Industrial Camera Market
    René von Fintel, Basler

MALS Focusing System in AR Applications

Korea-based SD Optics publishes a demo video of its MALS focusing system in AR applications:

Apple Code Hints of Internal Work on Li-Fi

AppleInsider reports that the recent versions of iOS code have been found to contain references to Li-Fi. "In addition to the software references, Apple is known to be working on hardware implementations for light-based wireless data transfer, or optical wireless communication."

Tuesday, January 19, 2016

Himax Launches Ultra Low Power Sensor for Computer Vision Apps

GlobeNewsWire: Himax Imaging announces the HM01B0, an ultra-low-power QVGA sensor that consumes less than 700µW when operating at QQVGA resolution at 30fps, and less than 2mW when operating at QVGA resolution, with support for even lower power modes.

The HM01B0 ultra-low power consumption allows the sensor to be placed in a constant state of operation, enabling “always on”, contextually aware, computer vision capabilities such as feature extraction, proximity sensing, gesture recognition, object tracking and pattern identification.

The HM01B0 integrates a motion detection circuit with an interrupt output pin, and an automatic exposure and gain control loop to minimize host processor computation as well as data communication to reduce system power. The sensor utilizes an advanced 3.6µm pixel technology that offers sensitivity below 1 lux. The sensor’s reflowable chip scale package measures less than 5mm² and requires only three passive components to support a highly compact camera module and miniature wafer-level module assembly.
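To illustrate the kind of always-on motion event such a sensor can raise, here is a software-only sketch using arbitrary thresholds and synthetic frames; it is not Himax's on-chip circuit.

    # Toy frame-differencing motion flag, loosely mimicking an always-on
    # motion-detect interrupt. Thresholds and frames are arbitrary assumptions.
    import numpy as np

    def motion_detected(prev, curr, pix_thresh=20, frac_thresh=0.02):
        """Flag motion when enough pixels change by more than pix_thresh."""
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        return (diff > pix_thresh).mean() > frac_thresh

    # Synthetic QQVGA (160x120) frames:
    prev = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
    curr = prev.astype(np.int16)
    curr[40:80, 60:100] += 60            # an object "moves" into part of the scene
    curr = np.clip(curr, 0, 255).astype(np.uint8)
    print(motion_detected(prev, curr))   # True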

Our image sensors for notebook and smartphone applications, such as our ¼" 8MP MIPI sensor, have been among the lowest power in the industry,” stated Jordan Wu, CEO of Himax Technologies. “We believe that the HM01B0 is the lowest power CMOS image sensor in the industry with similar resolution, while offering outstanding sensor performance and high level of feature integration. We are excited to build upon our core competence to develop a new class of sensors that will support very low power computer vision to enable new applications across smartphones, tablets, AR/VR devices, IoT, and artificial intelligence for consumer, medical, and industrial markets. With this new ultra-low power sensor, Himax has been working with leading consumer electronic brand customers and major platform providers to help develop innovative features and reduce power consumption of existing cameras. We have received a good level of interest from quite a few of the industry’s leading players.

Himax believes its HM01B0 is the best solution on the market to meet the ever-growing computer vision and power saving expectations and can be universally adopted for mobile devices, AR/VR devices, IoT, and artificial intelligence applications. The HM01B0 will be available in both monochrome and color options. The sensor can also be integrated into Wafer Level Modules, which will be available to selected customers and partners in Q1 2016.

FlexEnable and ISORG Present 1MP Flexible Image Sensor

FlexEnable (formerly Plastic Logic) and ISORG reveal the world's first large-area flexible fingerprint sensor on plastic designed for biometric applications. With an 8.6 cm x 8.6 cm active area, 84µm pitch (78µm pixel size with 6µm spacing) and 1024 x 1024 = 1,048,576 pixel resolution, the flexible sensor is 0.3mm thick and can operate in the visible and NIR up to wavelengths of 900nm. Other than fingerprints, the technology is also capable of measuring the configuration of veins in the fingers, providing additional security compared to a surface fingerprint alone.
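The published figures are self-consistent, as a quick check shows:

    # Quick consistency check of the quoted FlexEnable/ISORG figures.
    pixels_per_side = 1024
    pitch_um = 84                              # 78 um pixel + 6 um spacing

    print(pixels_per_side * pitch_um / 1000)   # 86.016 mm per side, i.e. ~8.6 cm
    print(pixels_per_side ** 2)                # 1,048,576 pixels, ~1 MP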

This new sensor is made by deposition of organic printed photodetectors (OPD) by ISORG onto a plastic organic thin-film transistor (OTFT) backplane, developed by FlexEnable. The large label-thin sensing area can be applied to almost any surface – and even wrapped around the objects in our daily lives that we typically come into contact with – such as a car steering wheel that recognises the driver as soon as the wheel is touched, or a credit card with integrated biometric detection.

Chuck Milligan, CEO of FlexEnable, said: “FlexEnable’s ground-breaking flexible electronics technology in combination with ISORG’s unrivalled expertise in OPDs and large area image sensors brings game-changing capabilities for biometric detection that can be applied to almost any surface – anything from door handles to wrists. For example, imagine a mobile device whose surface or edges know who is holding or touching the device. Such capabilities are viable because of the flexibility, thinness, and much lower cost per unit area compared to silicon area sensors.

Jean-Yves Gomez, CEO of ISORG, said: “This break-through development will spark the creation of next-generation products in biometrics. No other solution can offer large area sensing as well as finger print and veins recognition while being flexible, light and robust. Moreover, our team is able to provide reference design as well as image improvement algorithms and illumination solutions to ease the sensor integration into new applications.

Biological Eye Evolution

National Geographic publishes a nice article on biological eye evolution and vision of different animals:


The magazine also publishes a short video on that:



Thanks to DSSB for the link!

Monday, January 18, 2016

Camera Module Industry Report

ResearchInChina publishes the 2015 update of its "Global and China CCM (CMOS Camera Module) Industry Report." Few quotes, starting from CMOS image sensor market overview:

"In CIS field, shipments are expected to amount to 4,196 million units in 2015 and 4,390 million units in 2016, up 8.8% and 4.6% against the previous year, respectively, compared with annual growth of 11.5% in 2014, indicating a further slowdown. The market size is predicted to be USD9.16 billion in 2015 and USD9.628 billion in 2016, a year-on-year rise of 4.6% and 5.1%, respectively, compared with annual increase of 10.7% in 2014. Except for On-Semi, Sony and Sharp, all other vendors experienced declines... On-Semi is a bellwether in automotive CIS field, seizing nearly 50% market share. As integration went well very after acquisition of Aptina by On-Semi, combined with explosive growth in automotive camera market, On-Semi embraced rapid development in its CIS business, and is expected to record revenue of USD720 million in 2015, including USD400 million from automotive field, a surge of more than 100%.

Global CCM market size was worth USD16.247 billion in 2015, a year-on-year rise of 3.8% from 2014, the slowest rate since 2010. It is expected that growth rate will continue to decelerate in 2016, only 1.3%, but bounce back slightly to 1.6% in 2017 with a market size of USD16.732 billion.

Largan Precision still outshined others in Lens field with high-speed growth, while the rest of vendors almost all suffered setback in Lens business, except for the vendors with automotive Lens business which witnessed significant growth in such field but with a low ASP. Sunny Optical still ranked first globally in automotive Lens; Kantatsu, a subsidiary of Sharp, made its way into the supply chain of Apple with considerable performance growth.

Japanese companies dominated OIS market with Alps and Mitsumi tied for the first place and both being major suppliers for Apple. In addition, Mitsumi also aggressively marched into Chinese mainland market, planning to invest JPY25 billion to expand capacity over the next two years with the aim of competing for the global champion with Alps.

Bi-Direction and Close-Loop have become two main technologies in VCM field. Japanese vendors exited from low-end VCM field and focused on OIS or Bi-Direction and Close-Loop. The emergence of mainland Chinese companies in low-end market resulted in fierce competition.

In an increasingly competitive CCM field, the majority of companies were caught in the price war and the market became more concentrated. Many vendors registered higher shipments but smaller revenue, even as the number of pixels increased. Sharp, performing the best, became the second largest supplier for Apple, which placed more orders with Sharp so as to reduce its reliance on LG-INNOTEK, but LG-INNOTEK was still the largest supplier for Apple and ranked first by revenue globally. Cowell, the third largest supplier for Apple, also did very well, and was one of few companies with improved gross margin. SEMCO won more orders from Samsung. Sunny Optical, the No. 1 mainland Chinese vendor, maintained the momentum of strong growth but with a stagnant gross margin, and started shifting its focus to the Lens field in the hope of raising its overall gross margin. LITEON selectively gave up low-end business and saw a decline in orders from its major customer Samsung, leading to a collapse in revenue. MCNEX found strong growth in revenue by relying on automotive business.
"

Sunday, January 17, 2016

Ceva XM4 Vision Processor Gets Best Processor IP Award

MarketWired: The Linley Group announces the winners of its annual Analysts' Choice Awards which recognize the top semiconductor products of 2015. Ceva XM4 vision processor won the Award in Best Processor IP category.

"Our awards program not only recognizes excellence in chip design and innovation, but also recognizes the products that our analysts believe will have an impact on future designs," said Linley Gwennap, founder and principal analyst at The Linley Group. "These products significantly improve the design of systems in their target applications."

CEVA XM4 applications

Saturday, January 16, 2016

e2v Proposes Pulsed Antiblooming Gate

e2v patent application US20160005785 "Image sensor with anti-blooming gate" by Frédéric Barbier and Frédéric Mayer gives the following explanation of the excessive dark current resulting from a positive bias of the antiblooming gate G5:


"If the potential applied to the gate G5 is 0.6 to 1.1 volts, the potential in the active layer 12 beneath the gate G5 will be positive, equal to around 0.2 volts for example. There then exists a strong local electric field beneath the gate at the surface of the silicon towards the edge of the photodiode which is maintained at 0 volts by the surface region 16. This electric field acts by lowering the forbidden band of the semiconductor and by therefore increasing the probability of electrons passing into the conduction band. This is a physical effect of band-to-band tunnelling, which creates a leakage current. Electrons are generated beneath the gate without the lighting being the cause; they will go to be stored in the photodiode with the highest potential. This current can be likened to a dark current since it exists independently of the lighting. This dark current, specifically due to the presence of a difference between the potential beneath the gate and the surface potential of the photodiode, is particularly bothersome when detection of weak lighting is desired. It can be several hundred times higher than if the potential beneath the gate was nil."

Whether this explanation is correct or not, the patent application proposes a pulsed anti-blooming bias to minimize the dark current:

Friday, January 15, 2016

Sony to Start Automotive Sensor Mass Production in May 2016

Nikkei reports that Sony plans to start mass production of its automotive CMOS image sensor in May 2016. Initially, Sony planned to start volume production in December 2015. Slightly delayed, the first automotive product is expected to be the IMX224MQV, a 1/3-inch 1.27MP CMOS sensor announced in Oct 2014 and sampling since Nov 2014. Sony says that the sensor is able to shoot "high-quality color video" even at an illuminance of 0.005lx, which is equivalent to the illuminance on a dark night (darker than starlight).

If mass production begins in May 2016 as scheduled, the IMX224MQV might be used in vehicles to be released in 2018. EuroNCAP (European New Car Assessment Programme) will start to evaluate the capability of avoiding collisions with pedestrians and bicycles at night in 2018. "The year 2018 is especially important," Sony said. "High sensitivity for nighttime shooting is important, and we will utilize the strengths we have built up in the smartphone market."

Thursday, January 14, 2016

NIT Announces WDR InGaAs Cameras

New Imaging Technologies announces a new analog WDR InGaAs camera series with 320x256 pixels (QVGA) or 640x512 pixels (VGA). The analog WiDy SWIR cameras are available in CCIR (25fps) or EIA (30fps) versions with either a rolling-shutter QVGA 320A or global/snapshot-shutter QVGA 320A-S and VGA 640A-S. The cameras are TEC-less and feature over 140dB of intra-scene DR.

ToF Camera Goes Under Water

Optics.org: Researchers at SINTEF, Norway, are working with partners across Europe to develop sensors and lasers for under water ToF camera. The EU project UTOFIA (Underwater Time Of Flight Image Acquisition) has a budget of €5.7M, and will continue till 2018 as part of the European research program Horizon 2020. The other partners in the project are Bright Solutions (Italy), a Fraunhofer research center (Germany), Odos Imaging (UK), Subsea Tech (France), AZTI (Spain) and DTU Aqua (Denmark).

The biggest problem with traditional cameras is that their range is reduced in poor visibility, particularly in coastal waters made turbid by suspended sand and clay particles. Such cameras have a very limited range under these conditions”, said Project Manager Jens Thielemann at SINTEF.

The camera shutter is kept closed for approximately 50ns before it opens. When the first 50ns are gated out, most of the backscattering contribution to the noise is removed.
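A rough calculation shows how much near-field water the 50ns gate excludes (the refractive index of sea water is assumed to be about 1.34; purely illustrative):

    # Range covered by a 50 ns shutter gate in water (illustrative numbers).
    C_VACUUM = 299_792_458.0   # m/s
    N_WATER = 1.34             # assumed refractive index of sea water
    GATE_NS = 50.0

    c_water = C_VACUUM / N_WATER                   # ~2.24e8 m/s
    gated_range = c_water * GATE_NS * 1e-9 / 2     # /2 for the round trip
    print(gated_range)                             # ~5.6 m of near-field backscatter rejected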

Thanks to SO for the link!

DARPA Explores Fundamental Limits of Photon Detection

DARPA scientists suspect that the performance of light-based applications could improve by orders of magnitude if they could get beyond conventional photon detector designs—perhaps even to the point of being able to identify each and every photon relevant to a given application.

DARPA’s Fundamental Limits of Photon Detection—or Detect—program aims to establish the first-principles limits of photon detector performance by developing new fully quantum models of photon detection in a variety of technology platforms, and by testing those models in proof-of-concept experiments.

The goal of the Detect program is to determine how precisely we can spot individual photons and whether we can maximize key characteristics of photon detectors simultaneously in a single system,” said Prem Kumar, DARPA program manager. “This is a fundamental research effort, but answers to these questions could radically change light detection as we know it and vastly improve the many tools and avenues of discovery that today rely on light detection.

Wednesday, January 13, 2016

EMCCD Wins 2015 Product of the Year Award

ON Semiconductor’s KAE-02150 EMCCD wins Electronic Products Magazine 2015 Product of the Year Award:

Invisage at CES

Invisage publishes media responses on its demos at CES 2016:


Pixart to Buyback Shares

Digitimes: Pixart plans to buy back up to 2M of its shares from January 12 to March 11. The repurchased shares will be distributed to employees, the company said. This share buyback is the third conducted by the company since September 2015.

Due to a rise in shipments of automotive and security & surveillance sensors, as well as optical mouse sensors, Pixart revenues increased 1.2% QoQ to NT$1.08 billion (US$32.3M) in Q4 2015. Pixart reported revenues of NT$4.32 billion for 2015, down about 9% year-on-year.

Tuesday, January 12, 2016

Biometric Technologies Market Share

A TI white paper on biometrics gives the market shares of the different biometric technologies. Optical imaging approaches dominate the market by far:

Volkswagen Announces Strategic Partnership with Mobileye

Volkswagen announces an automatic driving strategic partnership with Mobileye:

ON Semi Presents 47MP CCD

BusinessWire: ON Semiconductor announces the KAI-47051 CCD, the world's highest resolution interline transfer CCD device. The 47MP KAI-47051 increases the resolution available for applications such as end-of-line flat panel inspection and aerial mapping by more than 50% compared to the 29MP KAI-29050 CCD widely used in these applications today, while retaining the CCD-level image uniformity and global shutter architecture those applications require. The new device is aimed at the growing demand for inspection of higher resolution smartphones, tablets, computer monitors, and televisions, and at improving image quality and overall efficiency in surveillance applications such as aerial mapping.

In addition to providing higher resolution through a larger optical format, the KAI-47051 incorporates a reduced-noise amplifier that lowers read noise by 15% compared to the existing device, increasing DR to 66dB. A 16-output architecture enables a maximum frame rate of 7fps, almost double that of the existing, lower resolution device.
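A couple of quick sanity checks on the published numbers (the per-output pixel rate is an estimate derived from them and ignores readout overheads; it is not an ON Semi specification):

    # Back-of-the-envelope checks on the KAI-47051 figures.
    pixels, outputs, fps, dr_db = 47e6, 16, 7, 66

    print(47 / 29 - 1)                   # ~0.62: >50% more pixels than the KAI-29050
    print(pixels * fps / outputs / 1e6)  # ~20.6 Mpixel/s per output (rough estimate)
    print(10 ** (dr_db / 20))            # ~2000:1 linear ratio for 66 dB dynamic range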

One of the nice features of ON Semi CCDs is a fairly detailed spec published together with the new product announcement:

Monday, January 11, 2016

Pixelplus to Manufacture Sensors at TPSCo

GlobeNewsWire: TowerJazz, TowerJazz Panasonic Semiconductor Co. (TPSCo), and PIXELPLUS announce they have collaborated to produce a state-of-the-art HD and FHD (full HD) SoC security sensor using TPSCo's leading 65nm CIS process. Production is expected to start at the beginning of 2016 at TPSCo's 12" fab in Japan.

PIXELPLUS integrates an ISP and an HD analog transmission function onto a CMOS sensor, said to be a world first. The HD transmission function enables data transmission over coaxial cables to distances longer than 500 meters.

PIXELPLUS is said to hold the number one position worldwide in the security/surveillance market, with a 34% market share in 2014 thanks to its dominance of the VGA segment.

"Through our collaboration with TPSCo, we were able to produce an HD and FHD SoC security sensor with unprecedented performance. We are excited to begin production as this type of business growth to FHD/HD is expected to provide us with a significant contribution in revenues and will keep PIXELPLUS on top in the future with the world's best image quality for CMOS image sensors," said Seo-Kyu Lee, CEO, PIXELPLUS.

"We look forward to expanding our relationship with PIXELPLUS and supporting the expected boom in the security market especially in China where we aim to take a lead position," said Russell Ellwanger, CEO, TowerJazz and Chairman of TPSCo. "The leadership of PIXELPLUS in the surveillance market, combined with our best in class 65nm CIS process, enables breakthrough technology which can be applied to other markets requiring higher resolution, such as automotive sensors, in which we also have extensive manufacturing experience."

Sunday, January 10, 2016

Aphesa EMVA1288 Test Setup Demo

Aphesa publishes a Youtube video showing its EMVA1288 camera test setup:

Boyd Fowler Joins Omnivision

As Boyd Fowler's LinkedIn page says, he has left Google and is now a Vice President at Omnivision.

Saturday, January 09, 2016

Heptagon Introduces Next Generation OLIVIA ToF 3DRanger

BusinessWire: Heptagon introduces OLIVIA, a complete ToF system module with an integrated microprocessor, adaptive algorithms, advanced optics, a ToF sensor and a light source. OLIVIA can accurately measure distances up to 2 meters in normal lighting conditions, compared with other solutions that are only able to reach similar distances in lower lighting. OLIVIA also requires 40% less power when ranging than alternative solutions.

We’re excited about the advancements OLIVIA brings to the market,” says René Kromhof, SVP of Sales and Marketing at Heptagon. “Our team is moving fast - In less than 3 months from the release of LAURA, our first product, we have introduced OLIVIA, our next generation sensor. With over 20 years’ experience in highly accurate distance mapping and 3D imaging technology, Heptagon is uniquely positioned to innovate and rapidly bring world-class products to market.

Image Sensors at 2016 EI Symposium

The 2016 IS&T International Symposium on Electronic Imaging, to be held on Feb. 14–18 in San Francisco, CA, publishes its preliminary program. There are many image sensor related short courses and papers:

EI13: Introduction to CMOS Image Sensor Technology

Instructor: Arnaud Darmont, APHESA

A time-of-flight CMOS range image sensor using 4-tap output pixels with lateral-electric-field control,

Taichi Kasugai, Sang-Man Han, Hanh Trang, Taishi Takasawa, Satoshi Aoyama, Keita Yasutomi, Keiichiro Kagawa, and Shoji Kawahito;
Shizuoka Univ. and Brookman Technology (Japan)

Design, implementation and evaluation of a TOF range image sensor using multi-tap lock-in pixels with cascaded charge draining and modulating gates,

Trang Nguyen, Taichi Kasugai, Keigo Isobe, Sang-Man Han, Taishi Takasawa, De Xing Lioe, Keita Yasutomi, Keiichiro Kagawa, and Shoji Kawahito;
Shizuoka Univ. and Brookman Technology (Japan)

A high dynamic range linear vision sensor with event asynchronous and frame-based synchronous operation,
Juan A. Leñero-Bardallo, Ricardo Carmona-Galán, and Angel Rodríguez-Vázquez,
Universidad de Sevilla (Spain)

A dual-core highly programmable 120dB image sensor,

Benoit Dupont,
Pyxalis (France)

Analog current mode implementation of global and local tone mapping algorithm for wide dynamic range image display,
Peng Chen, Kartikeya Murari, and Orly Yadid-Pecht,
Univ. of Calgary (Canada)

High dynamic range challenges
Short presentation by Arnaud Darmont, APHESA SPRL (Belgium)

Image sensor with organic photoconductive films by stacking the red/green and blue components,
Tomomi Takagi, Toshikatu Sakai, Kazunori Miyakawa, and Mamoru Furuta;
NHK Science & Technology Research Laboratories and Kochi University of Technology (Japan)

High-sensitivity CMOS image sensor overlaid with Ga2O3/CIGS heterojunction photodiode,
Kazunori Miyakawa, Shigeyuki Imura, Hiroshi Ohtake, Misao Kubota, Kenji Kikuchi, Tokio Nakada, Toru Okino, Yutaka Hirose, Yoshihisa Kato, and Nobukazu Teranishi;
NHK Science and Technology Research Laboratories, NHK Sapporo Station, Tokyo University of Science, Panasonic Corporation, University of Hyogo, and Shizuoka University (Japan)

Sub-micron pixel CMOS image sensor with new color filter patterns,
Biay-Cheng Hseih, Sergio Goma, Hasib Siddiqui, Kalin Atanassov, Jiafu Luo, RJ Lin, Hy Cheng, Kuoyu Chou, JJ Sze, and Calvin Chao;
Qualcomm Technologies Inc. (United States) and TSMC (Taiwan)

A CMOS image sensor with variable frame rate for low-power operation,
Byoung-Soo Choi, Sung-Hyun Jo, Myunghan Bae, Sang-Hwan Kim, and Jang-Kyoo Shin, Kyungpook National University (South Korea)

ADC techniques for optimized conversion time in CMOS image sensors,
Cedric Pastorelli and Pascal Mellot; ANRT and STMicroelectronics (France)

Miniature lensless computational infrared imager,
Evan Erickson, Mark Kellam, Patrick Gill, James Tringali, and David Stork,
Rambus (United States)

Focal-plane scale space generation with a 6T pixel architecture,
Fernanda Oliveira, José Gabriel Gomes, Ricardo Carmona-Galán, Jorge Fernández-Berni, and Angel Rodríguez-Vázquez;
Universidade Federal do Rio de Janeiro (Brazil) and Instituto de Microelectrónica de Sevilla (Spain)

Development of an 8K full-resolution single-chip image acquisition system,
Tomohiro Nakamura, Ryohei Funatsu, Takahiro Yamasaki, Kazuya Kitamura, and Hiroshi Shimamoto,
Japan Broadcasting Corporation (NHK) (Japan)

A 1.12-um pixel CMOS image sensor survey,
Clemenz Portmann, Lele Wang, Guofeng Liu, Ousmane Diop, and Boyd Fowler,
Google Inc (United States)

A comparative noise analysis and measurement for n-type and p-type pixels with CMS technique,
Xiaoliang Ge, Bastien Mamdy, and Albert Theuwissen;
Technische Univ. Delft (Netherlands), STMicroelectronics, Universite Claude Bernard Lyon 1 (France), and Harvest Imaging (Belgium)

Increases in hot pixel development rates for small digital pixel sizes,
Glenn Chapman, Rahul Thomas, Rohan Thomas, Klinsmann Meneses, Tony Yang, Israel Koren, and Zahava Koren;
Simon Fraser Univ. (Canada) and Univ. of Massachusetts Amherst (United States)

Correlation of photo-response blooming metrics with image quality in CMOS image sensors,
Pulla Reddy Ailuri, Orit Skorka, Ning Li, Radu Ispasoiu, and Vladi Koborov;
ON Semiconductor (United States)

Friday, January 08, 2016

Light Camera Demo

Mashable got a chance to see Light Co's L16 52MP array camera prototype at CES 2016. A few quotes from Mashable's impressions:

"On the prototype, the photo stitching took a little while to work and froze. In the end, I didn't get to see how fast it was. When I asked Dr. Rajiv Laroia, Light's co-founder and Chief Technology Officer, how long it will take to generate a 52-megapixel image on the final product, he told me they're shooting for under a minute.

That's a long time to wait for a complete image. The Light team is going to try to make the processing as fast and instantaneous as possible, but the company's not promising anything faster than under a minute right now.
"

"I have to admit, the sample image taken by the L16 looked pretty good with lots of details when zoomed in, but it also looked like it had a lot of image noise."

Lenovo to Make Smartphone with Google Tango 3D Camera

VentureBeat reports that Lenovo and Google announce a partnership to create a smartphone with a 3D camera based on Project Tango. Jeff Meredith, Lenovo VP, said the goal was to make a mainstream device scheduled to appear on the market in summer 2016.

Thursday, January 07, 2016

Socionext Shipping Dual Camera Image Processor

PRNewswire: Socionext (Fujitsu + Panasonic Semi) introduces the “M-12MO” (MBG967) Milbeaut Image Processor. The MBG967, which will be available in volume shipments starting in January, is mainly targeted at smartphones and other mobile applications. It supports dual cameras, the latest trend in mobile applications, along with functionalities such as low light shot and depth map generation. The expansion of dual camera capabilities in the mobile camera market has been highly anticipated because dual cameras enable new functionalities previously considered difficult with mobile cameras. These include low light shot, which integrates images from color and monochrome sensors, and the generation of depth maps, which can create background blur comparable to that of SLR cameras.

Main features of the MBG967 include:

Low light shot by dual camera: By integrating the images from color and monochrome image sensors, the MBG967 enables high-sensitivity, low-noise pictures:
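As a very rough illustration of the color-plus-monochrome idea (not Socionext's algorithm), one can take luminance from the cleaner monochrome sensor and chrominance from the color sensor, assuming the two frames are already registered:

    # Toy color + mono fusion: luma from the mono sensor, chroma from the color
    # sensor. Assumes registered, same-size frames; illustrative only.
    import cv2
    import numpy as np

    def fuse(color_bgr: np.ndarray, mono: np.ndarray) -> np.ndarray:
        ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
        ycrcb[:, :, 0] = mono            # replace luma with the low-noise mono image
        return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    # Usage: fused = fuse(cv2.imread("color.jpg"), cv2.imread("mono.jpg", cv2.IMREAD_GRAYSCALE))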


High-speed, high-accuracy autofocus: supports high-speed “Phase Detect AF” in addition to conventional “Contrast AF”. The MBG967 also supports “Laser AF”, which has an advantage in low light conditions. Its “Super Hybrid AF” utilizes these three AF methods in combination, always allowing faster and more accurate AF in varying conditions:

Intel Unveils R200 and ZR300 RealSense 3D Cameras

Intel announces the R200 RealSense 3D camera, said to be the company's first long-range depth camera for 2-in-1s and tablets. The new camera is aimed at:
  • 3D scanning: Scan people and objects in 3D to share on social media or print on a 3D printer.
  • Immersive Gaming: Scan oneself into a game and be the character in top rated games
  • Enhanced Photography/Video: Create live video with depth enabled special effects, remove/change backgrounds or enhance the focus and color of photographs on the fly.
  • Immersive Shopping: Capturing body shape and measurements as depth data that is transformed into a digital model enabling people to virtually try on clothes.
The RealSense R200 camera is capable of capturing VGA-resolution depth information at 60 fps. The camera uses dual-infrared imagers to calculate depth using stereoscopic techniques. By leveraging IR technology, the camera provides reliable depth information even in darker areas and shadows as well as when capturing flat or texture-less surfaces. The operating range for the Intel RealSense Camera R200 is between 0.5 meters and 3.5 meters, in indoor situations. The RGB sensor is 1080p resolution at 30 fps.
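The dual-IR approach reduces to triangulation: depth is focal length times baseline divided by disparity. A minimal sketch with made-up example values (they are not R200 specifications):

    # Stereo depth from disparity: Z = f * B / d. Baseline and focal length are
    # assumed example values, not Intel R200 specifications.
    BASELINE_M = 0.07       # assumed spacing between the two IR imagers
    FOCAL_PX = 600.0        # assumed focal length in pixels

    def depth_from_disparity(disparity_px: float) -> float:
        """Distance in meters for a given pixel disparity between the IR images."""
        return FOCAL_PX * BASELINE_M / disparity_px

    print(depth_from_disparity(12))   # 3.5 m: small disparity = far object
    print(depth_from_disparity(84))   # 0.5 m: large disparity = near object

Small disparities are hard to measure precisely, so depth error grows with distance, which is consistent with a bounded indoor operating range.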

A number of OEM products feature the RealSense R200, including the HP Spectre x2, Lenovo Ideapad Miix 700, Acer Aspire Switch 12 S, NEC LaVie Hybrid Zero11 and Panasonic models. The Intel RealSense Camera R200 is supported on all Windows 10 systems that run on 6th Generation Intel Core processors.

The RealSense ZR300 camera is an integrated unit within the new RealSense Smartphone Developer Kit. The Intel RealSense Camera ZR300 provides high-quality, high-density depth data at VGA resolution and 60 fps. The ZR300 supports the Google Project Tango spec for feature tracking and synchronization via time stamping between sensors.

Source: The Inquirer

Mobileye Requirements for Future ADAS Cameras

The Mobileye CES 2016 presentation has a slide on its requests to ON Semi and Sony regarding future automotive image sensors, starting at time 27:30:


Thanks to DS for the link!

Update: The CES 2016 presentation video has been posted on Youtube:

Wednesday, January 06, 2016

NVIDIA Presents Deep Learning Automotive Imaging Platform

NVIDIA launches NVIDIA DRIVE PX 2, said to be the world’s most powerful engine for in-vehicle artificial intelligence. DRIVE PX 2 can process the inputs of 12 video cameras, plus lidar, radar and ultrasonic sensors. It fuses them to accurately detect objects, identify them, determine where the car is relative to the world around it, and then calculate its optimal path for safe travel.

The company's Youtube promo video presents the new processor:

14nm Ambarella Camera SoC Consumes 2W in 4K 60fps Mode

BusinessWire: Ambarella introduces the H2 and H12 camera SoCs for sports and flying cameras. The 14nm-process H2 targets high-end camera models with 4K Ultra HD H.265/HEVC video at 60fps and 4K AVC video at 120fps, and includes 10-bit HDR video processing. The 28nm-process H12 targets mainstream cameras and offers 4K Ultra HD HEVC video at 30fps.
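For perspective, the quoted 2W at 4Kp60 works out to roughly 4nJ per processed pixel; the arithmetic below uses a nominal 3840x2160 frame and ignores everything except raw pixel throughput:

    # Rough pixel throughput and energy per pixel at 4Kp60, from the quoted 2 W.
    width, height, fps, power_w = 3840, 2160, 60, 2.0

    pix_per_s = width * height * fps
    print(pix_per_s / 1e6)              # ~498 Mpixel/s
    print(power_w / pix_per_s * 1e9)    # ~4 nJ per pixel (very rough figure of merit)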

With the introduction of H2 and H12 we now provide a complete portfolio of 4K Ultra HD HEVC solutions for sports and flying cameras,” said Fermi Wang, President and CEO of Ambarella.

Himax WLO Adopted in 3D Structured Light Camera

Himax announces that its Wafer Level Optics (“WLO”) laser diode collimator with integrated Diffractive Optical Element (“DOE”) has been integrated into laser projectors for next-generation structured light cameras. Himax's WLO system has a height of less than two millimeters. The WLO component is then stacked on top of a laser diode to reduce the overall height of a coded laser projector assembly to five millimeters.

Jordan Wu, President and CEO of Himax Technologies says "We are currently collaborating with several major OEMs' product developments using our WLO as our expertise in WLO design and manufacturing enables significant size and cost reduction of coded laser projectors. For example, in an active sensing 3D camera projector, our technology can reduce the size of the incumbent laser projector module by a factor of 9, actually making it smaller than conventional camera modules. This breakthrough allows our WLO collimator to be easily integrated into next-generation smartphones, tablets, automobiles, wearable devices, IoT applications, consumer electronics accessories and several other products to enable new applications in the consumer, medical, and industrial marketplaces."

The WLO laser collimator and DOE will be manufactured by Himax’s Wafer Optics production facility in Taiwan. The first production run for 3D camera applications is scheduled for delivery and sampling by Himax's partners and select customers in Q1 2016.

Update: GlobeNewsWire: Himax reports "higher-than-expected engineering fees from AR/VR project engagements with both current and new customers."

Tuesday, January 05, 2016

SanDisk VP Predicts 80MP Mobile Cameras This Year

EETimes: Christopher Bergey, VP & GM of Mobile and Connected Solutions, SanDisk, makes mobile imaging predictions for 2016:

"As we move into 2016, we will see major strides made towards a radically new smartphone camera market. Camera makers are pushing the boundaries of technology and exploring new areas, such as 3D cameras, massive megapixels (80MB), cameras that can take 360 degree panoramic images and video and cameras that can shoot 1,000 frames a second. 4K Ultra HD for mobile is another move to watch in 2016. Not just for video streaming, we’ll see users take advantage of this extreme resolution in new ways as they shoot and create their own content on their smartphones."

Stereo Vision Talk

IS&T Electronic Imaging (EI) Symposium publishes the 2015 Stereoscopic Displays and Applications keynote video, "What is stereoscopic vision good for?" by Jenny Read:

ST Presents 2nd Gen ToF Ranger Working at 940nm Wavelength

GlobeNewsWire: STMicroelectronics releases its second-generation FlightSense sensor for mobile devices. The new VL53L0 is faster, works over longer distances, and is more accurate.

With its 4.4 x 2.4 x 1mm form factor, the 6-pin packaged VL53L0 is said to be the smallest ToF module in the world, and the first to integrate a 940nm VCSEL light source, a SPAD photon detector, and an advanced microcontroller to manage the complete ranging function. Being the market's first module to use light emitted at 940nm, coupled with leading-edge infrared filters, the VL53L0 is said to deliver best-in-class ambient light immunity, and its emitted light is invisible to the human eye.

"ST technology advancements in Time-of-Flight ranging sensors are enhancing the experience for millions of consumers, revolutionizing the way they take pictures and videos with their smartphones and tablets," said Eric Aussedat, GM of ST's Imaging Division. "ST introduced the first fully-integrated Time-of-Flight ranging sensor to the market in 2014, which was then successfully adopted by several leading OEMs for the laser-assisted autofocus function. Today, with the VL53L0, our next generation, ST is redefining the benchmark in ranging performance and creating the opportunity to develop new applications in robotics and the IoT."

The VL53L0 is able to perform a full measurement operation in one image frame, typically less than 30ms, at distances beyond 2m. With such performance levels the camera system can achieve instant focus in both video and burst modes, even in low-light or low-contrast scenes.
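For context, the arithmetic behind direct time-of-flight ranging is simply distance = c * Δt / 2, which is why picosecond-scale timing is needed for centimeter accuracy at these distances. A minimal sketch of that relation is below; it is not ST's firmware or API, just the underlying physics.

# Back-of-the-envelope sketch of direct time-of-flight ranging; not ST's
# firmware or API, just the underlying physics.
C = 299792458.0  # speed of light, m/s

def distance_from_round_trip(delta_t_s):
    """Distance to the target given the measured round-trip time of a light pulse."""
    return C * delta_t_s / 2.0

def round_trip_time(distance_m):
    """Round-trip time of a light pulse for a target at the given distance."""
    return 2.0 * distance_m / C

# A target at 2 m returns the pulse after ~13.3 ns, so centimeter-level ranging
# needs timing resolution on the order of tens of picoseconds.
print(round_trip_time(2.0) * 1e9)          # ~13.34 (ns)
print(distance_from_round_trip(13.34e-9))  # ~2.0 (m)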

The VL53L0 is in production and available now, priced from $1.75 at the minimum order quantity of 5,000 units.

Huawei Invested $98M in Smartphone Image Processing

BusinessWire: Huawei invested $98M over three years to develop its first proprietary image sensor processor for faster focusing, higher clarity, and more accurate color shading. A leading research team was assembled in France to increase the camera's processing speed, quadrupling the processing bandwidth at 14-bit precision. (Possibly, Huawei is referring here to the ex-TI OMAP team in Nice, France, that TI laid off three years ago and Huawei then hired - ISW)

Monday, January 04, 2016

Gestigon Ports its Firmware to PMD and Inuitive Hardware

gestigon and pmdtechnologies announce a collaboration that combines Samsung’s GearVR, pmd’s CamBoard pico flexx depth sensor and gestigon’s Carnival AR/VR Interaction Suite to showcase how existing VR devices can be augmented with touchless interaction.

"pmd is proud to contribute our advanced depth sensing technology to gestigon’s effort,“ states Bernd Buxbaum, CEO of pmdtechnologies. “We are excited to bring a new VR user experience to Samsung’s GearVR headset,” confirms Moritz von Grotthuss, CEO of gestigon.

Current ways to interact with Gear VR applications are extremely limited, involving turns and nods of the head to indicate menu choices. gestigon's Carnival SDK enables a more natural interaction by visualizing the user's hands within the application and using gesture recognition to choose from multiple-choice menus or interact with virtual objects. The Carnival SDK requires depth information generated by pmd's pico flexx sensor, which is mounted on the front of the Gear VR headset and connected to the smartphone's USB port:


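To give a feel for what the depth stream contributes here, the toy Python sketch below segments candidate hands from a head-mounted ToF depth frame by keeping only near-range pixels and labeling connected blobs; the thresholds and frame size are assumptions for illustration, and this is not the Carnival SDK.

# Toy illustration (not the Carnival SDK): segment candidate hands from a ToF
# depth frame by near-range thresholding and connected-component labeling.
import numpy as np
from scipy import ndimage

def segment_hands(depth_mm, max_range_mm=700, min_blob_px=500):
    """Return a labeled mask of near-range blobs large enough to be hands."""
    near = (depth_mm > 0) & (depth_mm < max_range_mm)   # 0 means invalid pixel
    labels, n = ndimage.label(near)
    sizes = ndimage.sum(near, labels, index=list(range(1, n + 1)))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_blob_px]
    return np.where(np.isin(labels, keep), labels, 0)

# Example on a synthetic 171x224 frame (roughly pico flexx resolution)
frame = np.full((171, 224), 1500, dtype=np.uint16)   # background at 1.5 m
frame[60:110, 80:140] = 450                          # a "hand" at 45 cm
print(np.unique(segment_hands(frame)))               # [0 1]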
Inuitive, a developer of 3D computer vision and image processors, and gestigon announce a collaboration to bring gesture recognition to embedded virtual reality platforms.

"Using today's head-mounted VR displays, my hands are either not visible, or the tracking is so slow and inaccurate that the hands feel more like a robot's and not my own," says Moritz v. Grotthuss, CEO of gestigon.

Second-generation head-mounted devices will include front-facing 3D sensors to improve realism, but component cost and power consumption are key concerns. Bringing together the Inuitive NU3000 multi-core imaging processor and gestigon's gesture recognition algorithms, the collaboration between the two companies aims to address these concerns.

"Our unique technology and architecture uses input from standard, low-cost cameras to efficiently generate depth maps. Now, through our collaboration with gestigon, we can offer a complete one-stop solution to our customers, shortening the development cycle," said Shlomo Gadot, CEO and Co-Founder of Inuitive.

Inuitive’s NU3000 processor incorporates two CEVA MM3101 high-performance, low-power imaging and computer vision vector DSP cores. In addition, it integrates a dedicated hardware accelerator capable of extracting real-time depth maps from stereo vision input. The gestigon gesture recognition algorithms, based on its Carnival AR/VR Interaction Suite, are customized and optimized to run directly on this processor to provide fingertip and hand tracking, as well as gesture recognition.
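As a minimal illustration of the depth-from-stereo step such an accelerator performs, the sketch below converts a disparity map from rectified cameras into depth using Z = f * B / d; the focal length and baseline values are invented for the example and do not describe the NU3000.

# Minimal sketch of depth from stereo disparity for rectified cameras:
# Z = f * B / d. Parameter values here are illustrative only.
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a disparity map (pixels) into a depth map (meters)."""
    depth = np.full_like(disparity_px, np.inf, dtype=np.float64)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Example: 700 px focal length, 6 cm baseline, disparities of 10, 35 and 70 px
disp = np.array([[10.0, 35.0, 70.0]])
print(disparity_to_depth(disp, focal_px=700.0, baseline_m=0.06))
# -> [[4.2 1.2 0.6]] meters: larger disparity means a closer object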

Heptagon Introduces TARO 3DRanger, a Smart ToF Camera

BusinessWire: Heptagon introduces TARO, a ToF 3DRanger Smart Camera for fast 3D depth sensing applications at ranges up to several meters. TARO is said to be the industry's first complete solution that combines a 3D ToF camera with on-board algorithms and software. TARO improves upon Heptagon's successful SwissRanger line of industrial ToF cameras, with a matchbox-sized footprint, on-board image processing for fast, easy-to-use 3D depth information, and the ability to download new algorithms into the TARO hardware. This makes TARO a powerful and versatile platform for retail analytics such as people counting, occupancy detection, and intruder detection.

Omnivision CameraCubeChip Features in Eye Tracking Device

PRNewswire: OmniVision's global shutter CameraCubeChip will be integrated into the SMI eye tracking platform. Using OmniVision's 3 um OmniPixel3-GS global shutter pixel, the OVM6211 features high sensitivity in the NIR, minimizing the required power budget of the NIR light source. The OVM6211 captures full-resolution 400x400 pixel video at 120 fps. CameraCubeChip technology enables the OVM6211's extremely compact form factor of 3.23 x 3.23 x 3.92 mm for space-constrained devices.

A year-old Youtube video presents the SMI and OmniVision partnership:

Sunday, January 03, 2016

PhaseOne Camera Features 100MP MF Sony Sensor

dpreview.com: Phase One announces that in collaboration with Sony it has designed a new 100MP full-frame medium format CMOS sensor for its Phase One XF 100MP Camera:


Update: Phase One also announces the 100MP iXU-R 1000 and iXU 1000 aerial cameras featuring the same sensor: