Thursday, September 21, 2017

Auger Excitation Shows APD-like Gains

A group of UCSD researchers publishes an open-access Applied Physics Letters paper "An amorphous silicon photodiode with 2 THz gain‐bandwidth product based on cycling excitation process" by Lujiang Yan, Yugang Yu, Alex Ce Zhang, David Hall, Iftikhar Ahmad Niaz, Mohammad Abu Raihan Miah, Yu-Hsin Liu, and Yu-Hwa Lo. The paper proposes an APD-magnitude gain mechanism by means of a 30nm-thick amorphous Si film deposited on top of the bulk silicon:


"APDs have relatively high excess noise, a limited gain-bandwidth product, and high operation voltage, presenting a need for alternative signal amplification mechanisms of superior properties. As an amplification mechanism, the cycling excitation process (CEP) was recently reported in a silicon p-n junction with subtle control and balance of the impurity levels and profiles. Realizing that CEP effect depends on Auger excitation involving localized states, we made the counter intuitive hypothesis that disordered materials, such as amorphous silicon, with their abundant localized states, can produce strong CEP effects with high gain and speed at low noise, despite their extremely low mobility and large number of defects. Here, we demonstrate an amorphous silicon low noise photodiode with gain-bandwidth product of over 2 THz, based on a very simple structure."
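As a reminder of what the headline figure means: gain-bandwidth product is simply the product of multiplication gain and bandwidth, so a 2 THz GBP could correspond to, say, a gain of 1,000 at 2 GHz. The (gain, bandwidth) pairs below are illustrative assumptions, not values from the paper:

```python
def gain_bandwidth_product(gain, bandwidth_hz):
    """GBP of an amplifying photodetector: gain times bandwidth, in Hz."""
    return gain * bandwidth_hz

# Two assumed operating points that would each give a 2 THz GBP:
gbp_a = gain_bandwidth_product(1000, 2e9)  # gain 1,000 at 2 GHz
gbp_b = gain_bandwidth_product(2000, 1e9)  # gain 2,000 at 1 GHz
```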

Wednesday, September 20, 2017

Yole on iPhone X 3D Innovations

Yole Developpement publishes its analysis of iPhone X 3D camera design and implications "Apple iPhone X: unlocking the next decade with a revolution:"


"The infrared camera, proximity ToF detector and flood illuminator seem to be treated as a single block unit. This is supplied by STMicroelectronics, along with Himax for the illuminator subsystem, and Philips Photonics and Finisar for the infrared-light vertical-cavity surface-emitting laser (VCSEL). Then, on the right hand of the speaker, the regular front-facing camera is probably supplied by Cowell, and the sensor chip by Sony. On the far right, the “dot pattern projector” is from ams subsidiary Heptagon... It combines a VCSEL, probably from Lumentum or Princeton Optronics, a wafer level lens and a diffractive optical element (DOE) able to project 30,000 dots of infrared light.

The next step forward should be full ToF array cameras. According to the roadmap Yole has published this should happen before 2020.
"

Luminar on Automotive LiDAR Progress

OSA publishes a digest of the talk by Luminar CTO Jason Eichenholz at the 2017 Frontiers in Optics meeting. A few quotes:

"Surprisingly, however, despite this safety imperative, Eichenholz pointed out that the lidar system used (for example) in Uber’s 2017 self-driving demo has essentially the same technical specifications as the system of the winning vehicle in DARPA’s 2007 autonomous-vehicle grand challenge. “In ten years,” he said, “you have not seen a dramatic improvement in lidar systems to enable fully autonomous driving. There’s been so much progress in computation, so much in machine vision … and yet the technology for the main set of eyes for these cars hasn’t evolved.”

On the requirements side, the array of demands is sobering. They include, of course, a bevy of specific requirements: a 200-m range, to give the vehicle passenger a minimum of seven seconds of reaction time in case of an emergency; laser eye safety; the ability to capture millions of points per second and maintain a 10-fps frame rate; and the ability to handle fog and other unclear conditions.

But Eichenholz also stressed that an autonomous vehicle on the road operates in a “target-rich” environment, with hundreds of other autonomous vehicles shooting out their own laser signals. That environment, he said, creates huge challenges of background noise and interference. And he noted some of the same issues with supply chain, cost control, and zero error tolerance.

Eichenholz outlined some of the approaches and technical steps that Luminar has adopted in its path to meet those many requirements in autonomous-vehicle lidar. One step, he said, was the choice of a 1550-nm, InGaAs laser, which allows both eye safety and a good photon budget. Another was the use of an InGaAs linear avalanche photodiode detector rather than single-photon counting, and scanning the laser signal for field coverage rather than using a detector array. The latter two decisions, he said, substantially reduce problems of background noise and interference. “This is a huge part of our architecture.”
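The quoted 200-m range / 7-second reaction-time requirement is easy to sanity-check; the highway speed below is our assumed value, not a figure from the talk:

```python
def reaction_time_s(range_m, speed_kmh):
    """Seconds until a vehicle reaches an object first detected at range_m."""
    return range_m / (speed_kmh / 3.6)

# Assuming ~100 km/h highway speed (our assumption, not Eichenholz's figure):
t = reaction_time_s(200, 100)        # ~7.2 s, consistent with the quoted 7 s

# "Millions of points per second" at a 10-fps frame rate implies, e.g.:
points_per_frame = 2_000_000 / 10    # 2 Mpts/s -> 200,000 points per frame
```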


Wired UK publishes a video interview with Luminar CEO Austin Russell:

Tuesday, September 19, 2017

Functional Safety in Automotive Image Sensors

ON Semi publishes a webinar on Evaluating Functional Safety in Automotive Image Sensors:

Exvision High-Speed Image Sensor-Based Gesture Control

Exvision, a spin-off from the University of Tokyo's Ishikawa-Watanabe Laboratory, demos gesture control from a long distance, based on a high-speed image sensor (currently, a 120fps Sony IMX208):



SensL Demos 100m LiDAR Range

SensL publishes a demo video of a 100m LiDAR based on its 1 x 16 photomultiplier array scanned over a 5 x 80 deg field of view:

3D Camera Use Cases

Occipital publishes a few videos on 3D camera use cases:



OmniVision Announces Automotive Reference Design

PRNewswire: OmniVision announces an automotive reference design system (ARDS) that allows automotive imaging-system and software developers to mix and match image sensors, ISPs and long-distance serializer modules.

The imaging-system industry is anticipating significant growth in ADAS, including surround-view and rear-view camera systems. NHTSA mandates that all new vehicles in the U.S. be equipped with rear-view cameras by 2018. Surround-view systems (SVS) are also expected to become an even more popular feature in the luxury-vehicle segment within the same timeframe. SVSs typically require at least four cameras to provide a 360-degree view.

OmniVision's ARDS demo kits feature OmniVision's 1080p60 OV2775 image sensor, an optional OV495 ISP, and a serializer camera module. The OV2775 is built on a 2.8um OmniBSI-2 Deep Well pixel with a 16-bit linear output from a single exposure.

Monday, September 18, 2017

Samsung to Start Mass Production of 1000fps 3-Layer Sensor

ETNews reports that Samsung follows in Sony's footsteps to develop its own 1000fps image sensor for smartphones:

"Samsung Electronics is going to start mass-producing ‘3-layered image sensor’ in November. This image sensor is made into a layered structure by connecting a system semiconductor (logic chip) that is in charge of calculations and DRAM chip that can temporarily store data through TSV (Through Silicon Via) technology. Samsung Electronics currently ordered special equipment for mass-production and is going to start mass-producing ‘3-layered image sensor’ after doing pilot operation in next month.

SONY established a batch process system that attaches a sensor, a DRAM chip, and a logic chip in a unit of a wafer. On the other hand, it is understood that Samsung Electronics is using a method that makes 2-layered structure with a sensor and a logic chip and attaches DRAM through TC (Thermal Compression) bonding method after flipping over a wafer. From productivity and production cost, SONY has an upper hand. It seems that a reason why Samsung Electronics decided to use its way is because it wanted to avoid using other patents.
"

Turkish Startup Demos CMOS Night Vision

Ankara, Turkey-based PiKSELiM demos the low-light sensitivity of its 640x512 CMOS sensor operating in global shutter mode at 10fps with f/0.95 C-mount security camera optics:

Magic Leap Valuation to Grow to $6B

Bloomberg reports that AR headset startup Magic Leap is in the process of raising a new financing round of more than $500M at a valuation close to $6B. The company has already raised more than $1.3B in previous rounds valuing it at $4.5B.

"According to people familiar with the company’s plans, the headset device will cost between $1,500 and $2,000, although that could change. Magic Leap hopes to ship its first device to a small group of users within six months, according to three people familiar with its plans."

Sunday, September 17, 2017

Haitong Securities Forecasts $992.5B 3D Sensing Smartphone Market in 2020

InstantFlashNews quotes a number of Chinese-language sources saying that Haitong Securities analysts forecast global sales of smartphones equipped with 3D sensors to reach $992.5B in 2020. Sales of smartphones with a front structured light camera will account for $667.8B, while sales of smartphones with a rear ToF camera will take $324.7B.

Haitong Securities estimates the iPhone X 3D structured light component cost at ~$15, with the 3D image sensor at ~$3, the TX component at ~$7, the RX at ~$3, and the system module at about $2.
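The quoted per-component estimates add up as expected; a quick check (the dictionary labels are ours):

```python
# Haitong's quoted per-component estimates in USD (labels are ours):
cost_usd = {
    "3D image sensor": 3,
    "TX component": 7,
    "RX component": 3,
    "system module": 2,
}
total_usd = sum(cost_usd.values())  # 15, matching the ~$15 overall figure
```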

Image Sensors in AR/VR Devices

Citibank publishes a nice market report on Augmented and Virtual Reality dated October 2016. The report emphasizes the large image sensing content in almost all AR/VR devices:

Saturday, September 16, 2017

Espros Keeps Improving its ToF Sensors

Espros September 2017 Newsletter updates on the company progress with its ToF solutions:

"A real breakthrough was achieved in the field of camera calibration. Our initial goal was to simply find the optimum procedure to calibrate a DME660 camera. The result however is a revolutionary finding, that not only includes the compensation algorithm but also a simple desktop hardware for distance calibration.

No need any more for large target screens and moving stages! Simply put your camera in a shoebox sized flat field setup and calibrate the full distance range with help of the on-chip DLL stage. Done!
"
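The trick with the on-chip DLL mentioned above is that sweeping an electronic delay emulates targets at different distances, so no moving stage is needed: a round-trip delay Δt corresponds to an apparent distance c·Δt/2. A minimal sketch of that mapping (the 1 ns step is an assumed example, not an Espros spec):

```python
C = 299_792_458.0  # speed of light, m/s

def dll_delay_to_distance_m(delay_s):
    """Apparent target distance emulated by a round-trip delay of delay_s."""
    return C * delay_s / 2

# A 1 ns delay step sweeps the apparent distance by roughly 15 cm:
step_m = dll_delay_to_distance_m(1e-9)
```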


"You won't recognize our epc660 flagship QVGA imager in version 007! Improved ADC performance, 28% higher sensitivity, as well as low distance response non-uniformity (DRNU) of a few centimeters only (uncalibrated). We took 3 rounds (versions 004-006) in the fab transfer process and did not let go before we got it right."

The company also presents preliminary data on its ToFCam 635 module:

Friday, September 15, 2017

iPhone X 3D Camera Cost Estimated at 6% of BOM

GSMArena and MyFixGuide quote the Chinese site ICHunt.com estimating the Apple iPhone X 3D camera component cost at $25 out of the whole BOM of $412.75:

Digitimes on iPhone X Influence on the Industry

Digitimes believes that iPhone X "new features... such as 3D sensing are likely to become new standards for next-generation smartphones launched by Android-based smartphone vendors. The demand for 3D sensor modules is likely to experience an explosive growth in 2018-2019. Major players in the Android camp, including Samsung and Huawei, certainly will jump onto the bandwagon."

Meanwhile, smaller Android phone makers have jumped on this bandwagon even faster. The Doogee Mix 2 already offers face authentication based on its front 3D stereo camera:

IR Sensor Consumes No Power till Specific Wake-up Scene Detected

IEEE Spectrum, DARPA: Northeastern University (Boston, MA) researchers publish a Nature Photonics paper "Zero-power infrared digitizers based on plasmonically enhanced micromechanical photoswitches" by Zhenyun Qian, Sungho Kang, Vageeswar Rajaram, Cristian Cassella, Nicol McGruer, and Matteo Rinaldi.

"It consists of a tiny, micromechanical switch that controls the connection to a battery. Only when the switch is activated by the infrared radiation does it move to close the gap between itself and its battery, triggering the wake-up signal.

“The switch contacts are supported by beams made out of a two-material stack. When the temperature of this structure increases, one material expands more than the other, and therefore the beams bend,” Rinaldi explains. That bending allows the switch to make contact with the battery and spit out a signal.
"


“What is really interesting about the Northeastern IR sensor technology is that, unlike conventional sensors, it consumes zero stand-by power when the IR wavelengths to be detected are not present,” said Troy Olsson, manager of the N-ZERO Program in DARPA’s Microsystems Technology Office. “When those IR wavelengths are present and impinge on the Northeastern team’s IR sensor, the energy from the IR source heats the sensing elements which, in turn, causes physical movement of key sensor components. These motions result in the mechanical closing of otherwise open circuit elements, thereby leading to signals that the target IR signature has been detected.

“The technology features multiple sensing elements—each tuned to absorb a specific IR wavelength,” Olsson noted. “Together, these combine into complex logic circuits capable of analyzing IR spectrums, which opens the way for these sensors to not only detect IR energy in the environment but to specify if that energy derives from a fire, vehicle, person or some other IR source.

Thursday, September 14, 2017

40 Years in Imaging

Albert Theuwissen writes about his early CCD projects in the 1970s and what was considered the cutting edge in imaging at that time.

Wednesday, September 13, 2017

Yole Thoughts on iPhone X 3D Camera

EETimes' Junko Yoshida interviews Pierre Cambou, activity leader for imaging and sensor at Yole Développement:

"Cambou acknowledged that he was surprised to see the solution “way more complex than initially envisioned.” Building blocks inside the iPhone X, designed to enable Apple’s TrueDepth camera, include a structured light transmitter, a structured light receiver on the front camera and a time-of-flight/proximity sensor. Cambou said, “Apple managed to have so many technologies, and players behind those technologies, to work together for a very impressive result.”

Cambou said, “Well done indeed, if they were able to do such complex assembly.”

The Yole analyst suspects that STMicroelectronics is supplying the infrared camera and the proximity sensor. Apple might have sourced the front camera and the dot projector from AMS, he added.

While admitting that Apple isn’t — after all — using in iPhone X “ST’s SPAD imager as I dreamed,” Cambou conceded, “Apple combined admirably all the available technologies.”

Automotive LiDAR Market Overview

Semiconductor Engineering publishes an article "LiDAR Market Continues To Percolate." A few quotes:

"It’s too early to tell how market share for automotive LiDAR is shaping up, as the bigger vendors are still working to make sensors cost-efficient for use in advanced driver-assistance systems and automated driving.

Market research firms are issuing hockey-stick analyses on the LiDAR market’s potential growth. Grand View Research forecasts the worldwide automotive LiDAR market will be worth $223.2 million by 2024.

BIS Research estimates the automotive LiDAR market was worth $65 million last year. It will show double-digit compound annual growth over the next decade, according to the firm.
"

PhaseOne Trichromatic MF Sensor

Working closely with Sony, Phase One introduces the IQ3 101MP Trichromatic medium format digital back. It uses "a new CMOS sensor and Bayer Filter color technology, available only through Phase One," said to give the photographer "101-megapixels of creative possibility in never-before possible color definition." It is said to be capable of replicating, closer than ever, the color definition that the human eye sees.

"Designed around the concept of mimicking the dynamic color response of the human eye, we have physically customized the Color Bayer Filter on the 101-megapixel sensor to tailor the color response. This allows the Digital Back to capture color in a new way, unlike anything else.

The Phase One Trichromatic Philosophy is a promise that where color and quality can expand, while others may be satisfied with what they have, Phase One will always strive for perfection.
"

There is not much more info released about the new image sensor:

Tuesday, September 12, 2017

PMD and SensibleVision Present 3D Face Authentication Solution for Smartphones

PMD and SensibleVision announce a technology partnership to create a modern, mobile 3D facial recognition platform.

“With our leading 3D facial authentication solution, all handset makers can now transform the way people access and interact with their devices – and keep pace with or even move ahead of Apple,” says George Brostoff, co-founder and CEO of SensibleVision. “The quality of the data from the pmd ToF technology is amazing. Combining our 3D recognition with the 3D sensors allows perfect operation in the brightest sunlight and the darkest rooms with amazing speed and accuracy.”

“The combination of all the partners’ skills leads to an astonishingly small, robust, fast and effective 3D authentication solution for mobile devices. As Apple seems to predefine the future of innovative authentication solutions, we’re thrilled to enable OEMs with pmd depth sensors and SensibleVision’s 3DVerify solution for a rapid and efficient integration into their devices,” says Bernd Buxbaum, founding CEO of pmdtechnologies.

More Details on iPhone X 3D Camera

From Apple iPhone X official video:


Apple iPhone X Official Details

Mashable: Apple officially unveils its iPhone X featuring the "TrueDepth Camera System" based on structured light and Face ID unlock. A double tap on the side button is necessary to activate the Face ID system. The chance of the phone unlocking for a wrong face is said to be 1:1,000,000:
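The 1:1,000,000 figure is a per-attempt false-accept rate; the chance of at least one false accept grows with the number of independent attempts. A small sketch of that arithmetic (our assumptions: attempts are independent, and the quoted figure is a per-attempt rate):

```python
def false_accept_prob(attempts, far=1e-6):
    """P(at least one false accept) after `attempts` independent tries,
    each with per-attempt false-accept rate `far`."""
    return 1 - (1 - far) ** attempts

p_one = false_accept_prob(1)       # 1e-6, the quoted figure
p_many = false_accept_prob(1000)   # ~1e-3 over 1000 random faces
```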

Espros Ports its CCD on CMOS Process to TSMC Fab

ESPROS Photonics announces that the manufacturing process for its generation of ToF and spectral sensing chips has been finalized and frozen for mass production, effective August 15, 2017. This completes a customization project conducted during the last 18 months for the future ToF sensors epc660, epc635 and epc611, among a series of ESPROS customer-specific imagers, at SSMC, a TSMC-affiliated fab.

ESPROS Photonics has developed a BSI technology for high QE in the NIR and a high-performance CCD-on-CMOS process. The achieved QE is almost 90% at 850nm and 75% at 905nm. The ESPROS line of products with 8×8, 160×60 and 320×240 pixel resolutions is based on the same pixel and process. The production of these imagers is now established in the TSMC facility and released by ESPROS.
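QE translates directly into photodiode responsivity via R = QE·qλ/hc. A quick check of what the quoted numbers imply (standard physics only; nothing beyond the QE figures above comes from Espros):

```python
H = 6.62607015e-34   # Planck constant, J*s
C = 299_792_458.0    # speed of light, m/s
Q = 1.602176634e-19  # elementary charge, C

def responsivity_a_per_w(qe, wavelength_m):
    """Photodiode responsivity in A/W implied by a quantum efficiency."""
    return qe * Q * wavelength_m / (H * C)

r_850 = responsivity_a_per_w(0.90, 850e-9)  # ~0.62 A/W
r_905 = responsivity_a_per_w(0.75, 905e-9)  # ~0.55 A/W
```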

ESPROS CEO and founder Beat De Coi states: «After an intensive technology research & development, product design and market introduction effort, our next goal for the time-of-flight sensor products was to establish a robust supply chain, which would be able to handle the projected industry growth. With the completion of the customization project and the accomplishment of the process freeze with the global foundry leader TSMC, we are concluding a many year effort to establish this product portfolio. This is a major milestone in the development of our young company and for our customers success.»

Maria Marced, President TSMC Europe adds: «We are proud to support ESPROS with this customization project, and together we have successfully achieved Process Release to Production. TSMC’s strength in Analog, Mixed Signal and Sensor manufacturing has enabled us to achieve this milestone in the shortest possible time. We are now ready for mass production and we look forward to a long and successful collaboration with ESPROS.»

Monday, September 11, 2017

TrendForce Forecasts 3D Sensing Market Explosion

TrendForce anticipates that from 2017 onward, the market for 3D sensing solutions in mobile devices will see leaping growth. The total value of the global market for 3D sensing modules used in mobile devices is estimated to reach $1.5B in 2017 and is forecast to grow at a massive CAGR of 209% to around $14B in 2020.

“Based on an analysis model that includes iPhone-driven demand, the global market for 3D sensing modules used in mobile devices is projected to register a spectacular annual growth rate of 703% in the total value for 2017,” said TrendForce analyst Jason Tsai. The iPhone with 3D sensing would generate significant interest in the related hardware from Samsung, Huawei and other smartphone brands. “As 3D sensing apps mature, smartphone makers will also accelerate the incorporation of related hardware into their mainstream offerings. TrendForce therefore expects another demand surge in the mobile 3D sensing market in 2019.”

Tsai added that the 3D sensing feature on smartphones is currently used mainly for tasks that involve facial recognition of the user, such as unlocking the device and mobile payment.

Sunday, September 10, 2017

SmartSens Improves Its Sensors Sensitivity

China-based SmartSens reports that it has improved the NIR sensitivity of its 5MP 2um TSI pixel SC5035 sensor:


Also, SmartSens announces a new ILLUMi low-light technology:

"ILLUMi is an innovative pixel technology with more than twice the sensitivity of ordinary sensors in the visible and near-infrared ranges, which can be used to capture stunning high-quality full-color images. SmartSens' three starlight-grade products can capture color images in very dark conditions, while the minimum illumination required for black-and-white imaging is close to 0 lux, which means iLLUMi can shoot video at night in light levels invisible to the human eye."

KGI on Apple iPhone "Face ID" Internals

AppleInsider quotes KGI analyst Ming-Chi Kuo on the upcoming iPhone "Face ID" design. It employs four main components: a regular front camera, a structured light transmitter, a structured light receiver, and a ToF proximity sensor:


"According to Kuo, Apple's system relies on four main components: a structured light transmitter, structure light receiver, front camera and time of flight/proximity sensor.

Kuo points out that structured light transmitter and receiver setups have distance constraints. With an estimated 50 to 100 centimeter hard cap, Apple needs to include a proximity sensor capable of performing time of flight calculations. The analyst believes data from this specialized sensor will be employed to trigger user experience alerts. For example, a user might be informed that they are holding an iPhone too far or too close to their face for optimal 3D sensing.
"
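The gating logic Kuo describes can be sketched as a trivial threshold check on the ToF-measured distance; the window limits below are assumptions for illustration, not Apple's published values:

```python
def face_distance_hint(distance_cm, near_cm=25, far_cm=50):
    """Hypothetical UX helper: structured light works only within a limited
    depth window, so the ToF distance gates user feedback.
    The 25-50 cm window is an assumed example, not Apple's spec."""
    if distance_cm < near_cm:
        return "too close"
    if distance_cm > far_cm:
        return "too far"
    return "ok"
```

With Kuo's estimated 50-100 cm hard cap, a phone held at arm's length would report "too far" and the UI could prompt the user to bring it closer.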

Inuitive Introduces NU4000 3D Vision Processor

PRNewswire: Inuitive introduces the NU4000, a multi-core vision processor that supports 3D Imaging, Deep Learning and Computer Vision processing for AR and VR, Drones, Robots and other applications. This next generation processor enables high quality depth sensing, "On-chip SLAM," Computer Vision and Deep Learning (CNN) capabilities.

NU4000 provides computing power exceeding a total of 8 Tera OPS, and is said to be the most powerful vision processor available:
  • 3 Vector Cores that provide 500 Giga OPS
  • A dedicated CNN processor that exceeds 2 Tera OPS, enabling deep neural networks such as VGG16 to reach 40 frames (ROIs) per second at 10 times less power than equivalent GPU, DSP or FPGA implementations
  • 3 Powerful CPU Cores that provide more than 13,000 CoreMark
  • Depth processing engine that delivers a throughput of 120Mp/s and supports multiple simultaneous streams of stereo and structured light
  • SLAM engine enabling accurate key point extraction at 120fps from 2 cameras simultaneously
  • Advanced Time-Warp HW engine that reduces the Motion-to-Photon latency to 1ms for extensive VR and MR use cases
  • More than 3MB of on-chip SRAM servicing the vision cores
  • High-throughput LPDDR4 interface that reduces external memory access bottlenecks
  • Connects to 6 cameras and 2 displays
  • Chip area of 7 x 8 mm² in a 12nm process
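Two of the claims above can be cross-checked with rough arithmetic; the stream format and the VGG16 operation count below are our assumptions, not Inuitive's figures:

```python
# Assumed stream format: VGA (640x480) at 60 fps per camera.
vga_px = 640 * 480
streams_at_60fps = 120e6 / (vga_px * 60)   # ~6.5 VGA streams within 120 Mp/s,
                                           # consistent with the 6 camera inputs

# VGG16 inference is roughly 15.5 GMACs ~= 31 GOPs per frame (our estimate):
vgg16_ops_per_frame = 31e9
peak_fps = 2e12 / vgg16_ops_per_frame      # ~64 fps peak vs. the 40 fps claim
```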

Saturday, September 09, 2017

Dual Camera Trend Reaches Extreme Low-End Smartphones

DeviceSpecifications: These days, dual rear cameras are used even in very low-end phones, such as this one:

Mapping Imaging Array Temperature

TU Delft and Harvest Imaging publish an open-access paper "Temperature Sensors Integrated into a CMOS Image Sensor" by Accel Abarca, Shuang Xie, Jules Markenhof, and Albert Theuwissen. Apparently, the paper is a continuation of the MSc thesis "Integrating a Temperature Sensor into a CMOS Image Sensor."

"The test image sensor consists of pixels and temperature sensors pixels (=Tixels). The size of the Tixels is 11 μm × 11 μm. Pixels and Tixels are placed next to each other in the active imaging array and use the same readout circuits. The design and the first measurements of the combined image-temperature sensor are presented."

Can Machine Learning Overcome the Absence of a Lens?

Researchers from the University of Utah in Salt Lake City show that machine learning can be somewhat successful in distinguishing digits as seen by an image sensor with no lens. The "Lensless-camera based machine learning for image classification" paper by Ganghun Kim, Stefan Kapetanovic, Rachael Palmer, and Rajesh Menon is published on arxiv.org. From the abstract:

"Finally, we demonstrated that the trained ML algorithm is able to classify the digits with accuracy as high as 99% for 2 digits. Our approach clearly demonstrates the potential for non-human cameras in machine-based decision-making scenarios."
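A toy sketch of the idea, not the paper's method or data: a lensless sensor records a multiplexed intensity pattern rather than a focused image, but a classifier can still separate classes as long as the patterns are class-dependent. Here the sensor response is faked with synthetic templates and a nearest-centroid classifier (all names and parameters are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sensor_pattern(digit, n_px=256):
    # Stand-in for the lensless measurement: a fixed per-class template
    # (the "multiplexed" pattern) plus sensor noise. Purely synthetic.
    template = np.sin(np.arange(n_px) * 0.1 * (digit + 1))
    return template + 0.3 * rng.standard_normal(n_px)

# "Train" a nearest-centroid classifier on two classes, mirroring the
# paper's 2-digit experiment in spirit only.
centroids = {d: np.mean([sensor_pattern(d) for _ in range(50)], axis=0)
             for d in (0, 1)}

def classify(x):
    return min(centroids, key=lambda d: np.linalg.norm(x - centroids[d]))

acc = np.mean([classify(sensor_pattern(d)) == d
               for d in (0, 1) for _ in range(100)])
# Separable synthetic patterns classify near-perfectly; real lensless data
# is much harder, which is why the paper's 99% on 2 digits is notable.
```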

Friday, September 08, 2017

SEMI European Imaging & Sensors Summit 2017

SEMI European Imaging & Sensors Summit 2017 is to be held on September 20-22 in Grenoble, France. The summit agenda has many interesting presentations:

  • Sensor as a solution
    Chae Lee, Senior VP, LG Electronics
  • In the era of mixed reality and augmented environment sensing, what does innovation mean? What is Leti’s vision?
    Marie Noelle Semeria, CEO, Cea-Leti
  • 3D Time-of-Flight Cameras in Industrial Applications
    Thomas Kuhnke, Senior Electronics Design Engineer, 3D Business, Basler AG
  • Hybrid semiconductor direct conversion CMOS x-ray imaging detectors with application examples
    Tuomas Pantsar, Chief Technology Officer / Charge Integration, Oy Ajat
  • CMOS based microdisplays, imager and sensors enhanced by OLED/OPD integration
    Philipp Wartenberg, IC Design Engineer and Project Manager / Deputy head of department IC and System Design, Fraunhofer Institute for Organic Electronics, Electron Beam and Plasma Technology FEP
  • TSMC Imaging Technology to Enable Innovation in AR/VR Applications
    Tripti Bhanti, Senior Technical Manager, TSMC Europe BV
  • Fusion Bonding Enabler for Backside Illuminated Image Sensors - What’s Next
    Thomas Uhrmann, Director of Business Development, EVG
  • Technological Trends in Thermal Image Sensors
    Christel-Loic Tisse, Technical & Innovation Director, ULIS
  • Packaging of Image Sensors and Devices
    Lutz Mattheier, Manager Assembly Technology Development, First Sensor Microelectronic Packaging GmbH
  • Image Fusion: How to Best Utilize Dual Cameras
    Roy Fridman, Director of Product Marketing, Corephotonics
  • High Performance CMOS Integrated Graphene Photodetectors
    Tapani Ryhänen, CEO, Emberion
  • CMOS image sensor for bio-medical and scientific applications
    Renato Turchetta, CEO, IMASENIC
  • New developments in logarithmic pixels and sensors
    Yang Ni, Founder & CTO, NIT
  • Colour X-ray Imaging Solutions for Non-Destructive Testing based on Photon Counting Technology
    Juha Kalliopuska, Chief Executive Officer and Co-founder, ADVACAM
  • High resolution global shutter image sensors for machine vision and 8K video
    Guy Meynants, Engineering Fellow, ams/CMOSIS
  • VTT’s hyperspectral imaging technology - from high-performance applications towards volume scalability
    Anna Rissanen, Research Team Leader, VTT
  • Image Sensors for Future VR
    Yiwan Wong, Partnership Lead, Oculus Research
  • Ambient Sensing – New ways how intelligent devices can cope with the environment
    Roland Helm, Segment Head Sensors, Infineon Technologies AG

Leti Announces High Resolution Fingerprint Pressure Sensing Technology

Leti announces the readiness of its new very high-resolution (>1000 dpi) fingerprint sensing technology based on a matrix of interconnected piezoelectric ZnO nanowires:

Sony CineAlta Venice Features Full-Frame 4K 60fps Sensor

Sony announces Venice, the next-generation digital cinema camera featuring a 4K full-frame sensor capable of 60fps, or 6K at 30fps. Venice is said to have a high-speed readout which minimizes the jello effect typical of rolling-shutter CMOS sensors. The new sensor's DR is promised to achieve 15+ stops.
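The readout-speed claim can be put in numbers: a faster readout sweeps the rolling shutter across the frame in less time, which is what reduces the skew. A back-of-the-envelope pixel rate, assuming a 4K DCI-like 4096 x 2160 format (our assumption, not Sony's published spec):

```python
# Assumed frame format; Sony has not published the exact readout figures here.
width, height, fps = 4096, 2160, 60
pixel_rate = width * height * fps  # ~531 Mp/s minimum sustained readout
```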