Editor’s Note: This article comes from the WeChat official account "Haitong Electronics Research" (ID: htzqdz) and is written by the Haitong Securities Electronics Team.
Analysis of Apple 3D Visual Structured Light Scheme
Analysis of PrimeSense (Apple Acquisition) Scheme of Structured Light Pioneer
PrimeSense, an Israeli company, was founded in 2005. It developed a 3D sensor in 2006 and made contact with Microsoft at that year's E3 exhibition. At E3 2009, Microsoft unveiled the first-generation Kinect, built around a PrimeSense 3D sensor. At the end of 2010, PrimeSense partnered with ASUS to develop the Xtion Pro, which used the same 3D sensor as the Kinect. In 2012, PrimeSense introduced Capri, then the world's smallest 3D sensor. After Apple acquired PrimeSense in 2013 and stopped licensing its technology externally, Microsoft switched to its own 3D sensor for the Kinect 2.0 in 2014.
PrimeSense's core technology is Light Coding, a variant of structured light and the most representative structured light technology today. Structured light works by using a light source to encode the space under measurement: a specific one-dimensional or two-dimensional pattern is projected onto the target object, and the object's surface shape and depth are inferred from the deformation of that pattern. Unlike general structured light schemes (such as Intel's RealSense structured light scheme), the Light Coding source is "laser speckle", a diffraction pattern formed when a laser passes through a DOE diffraction grating. Once such a speckle pattern is projected, the whole space is effectively marked. When an object enters that space, analyzing the speckle pattern falling on it reveals the object's position, so depth information can be captured very quickly.
PrimeSense's Light Coding works by emitting infrared laser light, which forms laser speckle after passing through a DOE (diffraction grating) in front of the lens; the speckle is distributed and projected uniformly across the measurement space. An infrared camera then records every speckle in the space to obtain the raw data, which a dedicated chip computes into an image with 3D depth.
Accordingly, a teardown of the first-generation Microsoft Kinect shows that its main components are an infrared emitter, an infrared CIS sensor, a visible light sensor, and a core data processing chip (the PS1080).
The workflow of the PrimeSense structured light scheme is as follows: an infrared laser emits near-infrared light, which is encoded by optical elements such as the DOE into laser speckle and projected onto objects in space. When the speckle lands on an object, its pattern is displaced and deformed. The infrared CIS captures the deformed speckle and passes it to the PS1080 chip, whose algorithms compute the object's Z-axis depth. The visible light CIS captures the object's X/Y-axis plane image and also passes it to the PS1080. By combining the plane information with the depth information, the chip obtains the object's three-dimensional position in space.
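The depth calculation in this workflow can be illustrated with the standard triangulation relation used by speckle-disparity depth sensors: a speckle's shift relative to its position on a calibrated reference plane maps to absolute depth. The sketch below is a minimal illustration, not PrimeSense's actual algorithm; the focal length, baseline and reference distance are assumed values.

```python
# Simplified depth-from-disparity sketch for a speckle-based depth sensor.
# All parameters are illustrative, not PrimeSense's actual calibration.
FOCAL_LENGTH_PX = 580.0   # IR camera focal length, in pixels (assumed)
BASELINE_M = 0.075        # projector-to-IR-camera baseline, in meters (assumed)
REF_DISTANCE_M = 2.0      # distance of the calibrated reference plane (assumed)

def depth_from_disparity(disparity_px: float) -> float:
    """Map the observed speckle shift (disparity, in pixels) relative to
    the reference plane to absolute depth via triangulation:
        1/z = 1/z_ref + d / (f * b)
    Sign convention: positive disparity means closer than the reference."""
    inv_z = 1.0 / REF_DISTANCE_M + disparity_px / (FOCAL_LENGTH_PX * BASELINE_M)
    return 1.0 / inv_z

# A speckle with zero shift lies on the reference plane; a shifted
# speckle lies nearer or farther depending on the sign of the shift.
print(depth_from_disparity(0.0))    # exactly the reference distance
print(depth_from_disparity(10.0))   # closer than the reference plane
print(depth_from_disparity(-5.0))   # farther than the reference plane
```

The same relation explains why depth accuracy degrades with distance: a fixed one-pixel disparity error corresponds to a larger depth error as z grows.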
On the chip side, the scheme uses PrimeSense's PS1080 system-on-chip (SoC), which delivers 640*480-resolution images with an x/y plane resolution of 3 mm (at a distance of 2 m) and a depth accuracy of 1 cm. The PS1080 has strong parallel computing logic: it controls the near-infrared light source, encodes the image, and actively projects the near-infrared pattern. The projected Light Coding pattern is received by a standard infrared CMOS image sensor, which transmits the encoded reflected speckle image back to the PS1080 for processing into a depth image.
Overall Structure Analysis of 3D Visual Structured Light Scheme Products
A teardown of the products of PrimeSense, the structured light pioneer, shows that the whole structured light scheme consists of four main parts: a TX projector (chiefly the infrared emitter, IR LD), an RX receiver (chiefly the infrared image sensor, IR CIS), an RGB visible light image sensor (Vis CIS), and a dedicated data processing chip.
The same product structure appears in Intel's RealSense short-range structured light scheme. The RealSense close-range 3D vision scheme is also based on the structured light principle and consists of an infrared emitter, an infrared sensor, a visible light color sensor, and a RealSense image processing chip. The infrared emitter casts near-infrared light onto the object's surface; the infrared sensor and the color sensor capture the object's depth image and plane image respectively; and the RealSense chip processes them into three-dimensional position information.
We can therefore summarize the working principle of a typical structured light 3D vision system: an infrared laser emitter (IR LD) projects near-infrared light in a specific pattern (such as laser speckle); after reflection off an object (such as a hand or face), the deformed pattern is received by an infrared image sensor (IR CIS), and an algorithm computes the object's position along the Z axis. Meanwhile, the visible light image sensor (Vis CIS) captures the hand/face in two dimensions (X and Y axes). The outputs of both image sensors feed into a dedicated image processing chip, yielding three-dimensional data and spatial positioning.
Deep Disassembly and Supply Chain Analysis of 3D Visual Structured Light Scheme
3D Visual Structured Light Scheme ——TX Infrared Emitting Part
The TX infrared emitting part is one of the key components of the whole 3D vision system: it provides the core near-infrared light source, and the quality of the emitted pattern is critical to the overall recognition result. Compared with the TOF scheme, structured light 3D vision is considerably more complex on the TX side, mainly because structured light requires patterned images (such as laser speckle) for spatial coding, which calls for a customized DOE (diffraction grating) and WLO (wafer-level optics, including the beam expander, collimator, projection lens, etc.).
The working principle of the whole TX transmitting part is as follows:
1) First, the VCSEL laser emitter produces near-infrared light at a specific wavelength (generally 880 nm/910 nm/940 nm): a well-collimated Gaussian beam with a narrow cross section.
2) The beam then passes through a Beam Shaper to form a uniform, collimated beam with a large cross-sectional area. The Beam Shaper mainly comprises a beam expander (Beam Homogenizer) and a collimating lens (Collection Lens). The beam expander enlarges the laser beam's cross section so that it covers the diffraction element behind it; the collimator re-forms the expanded beam into parallel light.
3) The beam leaving the Beam Shaper then passes through the DOE diffractive optical element to form the specific optical pattern.
4) The pattern formed by the DOE finally passes through the Projection Lens and is emitted from the TX transmitter.
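The role of the Beam Shaper in step 2 can be quantified with textbook Gaussian beam optics: far-field divergence is inversely proportional to the beam waist, so expanding the beam by a factor M also reduces its divergence by M. A minimal sketch with assumed values (the VCSEL waist and magnification below are illustrative, not actual device parameters):

```python
import math

# Far-field half-angle divergence of a Gaussian beam:
#     theta ~ lambda / (pi * w0)
# so enlarging the waist w0 by M shrinks divergence by the same factor M,
# which is exactly what the beam expander + collimator pair accomplishes.
WAVELENGTH_M = 940e-9  # assumed VCSEL wavelength: 940 nm

def divergence_half_angle(waist_m: float) -> float:
    """Far-field half-angle divergence (radians) of a Gaussian beam."""
    return WAVELENGTH_M / (math.pi * waist_m)

w0 = 5e-6        # raw VCSEL beam waist, ~5 um (assumed)
expansion = 100  # beam expander magnification (assumed)

theta_raw = divergence_half_angle(w0)
theta_expanded = divergence_half_angle(w0 * expansion)
print(f"raw divergence:      {math.degrees(theta_raw):.3f} deg")
print(f"expanded divergence: {math.degrees(theta_expanded):.5f} deg")
```

The expanded, nearly parallel beam is what allows the DOE behind it to be illuminated uniformly across its full aperture.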
4.1.1 VCSEL is the best choice for the near-infrared light source
At present, three kinds of near-infrared light sources can cover the 800-1000 nm band: the infrared LED, the infrared edge-emitting laser diode (LD-EEL), and the vertical cavity surface emitting laser (VCSEL).
The VCSEL is a type of infrared laser diode; its full name is vertical cavity surface emitting laser. As the name implies, it emits vertically, unlike the side emission of other infrared LDs. The vertical structure makes the VCSEL better suited to wafer-level manufacturing, packaging and testing. Compared with the edge-emitting LD, the VCSEL offers lower cost at volume and higher reliability, and it is free of the failure modes of traditional laser structures such as dark-line defects. Compared with the LED, the VCSEL offers higher spectral quality, smaller temperature drift of the center wavelength and faster response, clear advantages overall.
Comparing the three schemes: although the LED is cheap, its wide emission angle means more output power is needed to overcome losses, and LEDs cannot be modulated quickly, which limits resolution and requires longer flash durations. The edge-emitting LD is also a candidate for gesture recognition, but its output power is fixed and its edge-emitting structure is less compatible with volume manufacturing. Compared with the LD-EEL, the VCSEL offers low drive voltage and current, low power consumption and a higher tunable light source frequency (up to several GHz); it is compatible with compound semiconductor processes and suited to large-scale integrated manufacturing. Its low power consumption, high modulation frequency and vertical emission make the VCSEL more suitable than the LD-EEL for consumer electronic smart terminals.
The VCSEL manufacturing process is difficult and product costs were once relatively high, but with the attention of major manufacturers, and especially the rapid growth of high-speed optical communications, the VCSEL process has matured. In recent years VCSELs have been widely used as laser sources in high-speed optical network transmission, and current product prices are very close to those of the LD-EEL.
VCSEL manufacturing relies on the MBE (molecular beam epitaxy) or MOCVD (metal-organic chemical vapor deposition) process: multiple reflective and emitting layers are grown on GaAs (about an 80% share) or InP (about a 15% share) wafers. A typical VCSEL structure comprises a laser cavity, top and bottom distributed Bragg reflectors (DBRs), electrodes, and so on; the main parts of the laser cavity are the quantum wells and the optical confinement structure. Because the VCSEL mainly uses III-V compound semiconductor materials, GaAs or InP (with In, Al and other dopants), the industrial chain for mobile VCSELs resembles that of compound semiconductors.
At present, the world's main VCSEL suppliers include Finisar, Lumentum, Princeton Optronics, II-VI and others, all at the forefront of mobile VCSEL R&D. Production splits into IDM and foundry models. Under the foundry model, companies such as IQE, Quanxin and Lianya Optoelectronics provide III-V epitaxial wafers; companies such as Hongjieke and Wenmao fabricate the wafers; and companies such as Lianjun, Silicon Products and Tongxin (substrate) package and test them into finished VCSEL devices.
The companies focused on miniaturized VCSEL design for mobile terminals are mainly overseas optical communication device makers such as Finisar, Lumentum, Princeton Optronics (acquired by ams) and II-VI. Domestically, Guangxun Technology and Huaxin Semiconductor can design and produce low-end VCSELs, and the Changchun Institute of Optics and Mechanics is competitive in VCSEL technology R&D, but overall a large gap remains between domestic companies and the overseas giants.
In recent years, promising startups have begun to appear in China's VCSEL field. For example, Zonghui Optoelectronics, founded in Silicon Valley in January 2016, has produced high-efficiency, high-performance VCSEL chips covering the 850 nm and 940 nm bands and is expected to serve the 3D vision consumer market.
According to Lumentum's earnings briefing for the second quarter of 2017, its orders for consumer-grade VCSEL products jumped from $5 million in the previous quarter to $200 million. Analyses of the US and Taiwan supply chains (such as BI and the Science and Technology Times) indicate the orders came mainly from Apple. We judge that Lumentum will supply VCSEL devices for the 3D camera in Apple's next-generation iPhone 8, as the main supplier. Besides Lumentum, II-VI is also in Apple's supply chain, and Finisar is expected to join.
4.1.2 DOE is critical to the structured light scheme
In the structured light scheme of 3D vision, depth must be measured with a specific optical pattern (such as laser speckle), so the DOE is one of the most important core components of the scheme.
DOE (diffractive optical elements) are optical elements based on the diffraction of light. They are designed with computer assistance and fabricated with semiconductor chip processes: a stepped or continuous relief structure (usually a grating) is etched into a substrate (or the surface of a traditional optical element), enabling coaxial reconstruction with high diffraction efficiency.
The basic principle of the DOE is to fabricate steps (gratings) of a certain depth on the element's surface. As the beam passes through, different optical path differences arise that satisfy the Bragg diffraction condition; by varying the design, the beam's divergence angle and spot shape can be controlled, shaping the beam into a specific pattern. A DOE is a single optical element that can split the incident beam into a great many output beams, each sharing the optical characteristics of the original incident beam, including polarization and phase. A DOE can generate a 1D (1xN) or 2D (MxN) beam matrix, depending on its surface microstructure.
In the 3D vision structured light scheme, the DOE's role is to transform the laser's point source into a speckle pattern via diffraction. First, a three-dimensional master mold is designed and made according to the optical requirements of the target diffraction image; a DOE grating is then fabricated from the master. The grating surface carries three-dimensional microstructure patterns, all at the micron scale. The laser beam diffracts as it passes through the DOE; the angles and number of diffracted beams are controlled by the pattern on the DOE, and the diffracted spots carry the light coding information.
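The way a periodic microstructure fans one beam out into many can be illustrated with the textbook grating equation, sin(theta_m) = m * lambda / d. This is a simplified 1D sketch; a real speckle DOE uses a computed 2D microstructure, and the wavelength and grating period below are assumed values:

```python
import math

# Diffraction-order angles from the grating equation sin(theta_m) = m*lambda/d.
# Illustrative values only: a 940 nm laser and a 2 um grating period
# (assumed; not the parameters of any actual DOE design).
WAVELENGTH_M = 940e-9
PERIOD_M = 2e-6

def order_angle_deg(m: int):
    """Angle of diffraction order m in degrees, or None if the order
    is evanescent (|sin theta| > 1) and does not propagate."""
    s = m * WAVELENGTH_M / PERIOD_M
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Only a finite number of orders propagate; higher orders are cut off.
for m in range(0, 4):
    print(m, order_angle_deg(m))
```

A real DOE superimposes many such periodic components so that the propagating orders form the designed speckle layout rather than a simple row of dots.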
The DOE will also play an important role in Apple's structured light scheme. Analyzing the Apple/PrimeSense structured light patent, as shown in the figure below, the DOE diffraction grating in the transmitter assembly is the key to producing laser speckle.
The industrial chain for DOE diffractive optical elements mainly comprises DOE optical pattern design, DOE manufacturing and processing, and optical element module packaging, plus two supporting links: raw materials (mainly special quartz glass, photosensitive glass, etc.) and precision optical processing equipment (such as mask aligners).
As for Apple, according to Taiwan supply chain reports (Zhongshi Electronic News, Digitimes, etc.), the DOE for Apple's 3D vision structured light will be designed by PrimeSense, with micro-nano patterning by TSMC, ITO materials from Caiyu, and device packaging by Jingcai Technology.
At present, few companies possess advanced DOE design and manufacturing capabilities; the main suppliers worldwide are CDA of Germany, Silios of France and Holoeye of Germany, and in miniature DOE devices for mobile terminals no related products have yet appeared. According to the Taiwan technology media Zhongshi Electronic News, Qualcomm is actively developing a 3D vision structured light scheme and will adopt Himax's solution for the DOE and WLO. Domestically, no company yet has DOE design and processing capabilities.
4.1.3 The wafer-level optical element (WLO) is a core component
As analyzed above for the structured light TX part, the near-infrared light emitted by the VCSEL first passes through a Beam Shaper (mainly a beam expander and a collimating element, the Collection Lens) to form a uniform collimated beam with a large cross section; the optical pattern formed by the DOE then passes through the final Projection Lens before leaving the TX transmitter.
When the 3D vision structured light scheme is used in motion-sensing interactive products, device size requirements are loose. Intel's front-facing RealSense structured light products, for example, use ordinary optical lenses and DOE devices, and the devices are large.
To bring the structured light scheme into mobile consumer electronics, the emitter must be compressed in volume and size, so both the Beam Shaper and the Projection Lens are fabricated with the WLO (wafer-level optics) process.
WLO refers to wafer-level lens manufacturing technology and processes. Unlike traditional optical device processing, the WLO process replicates lenses in batches across a whole glass wafer using semiconductor techniques, bonds multiple lens wafers together, and then dices them into individual lenses, yielding small size, low height and good consistency. Positional accuracy between optical lenses reaches the nanometer level, making WLO the best choice for standardized optical lens assemblies in the future.
Unlike traditional optical lens processing, the WLO process is better suited to mobile consumer electronics. Especially given the complex structure of the 3D vision transmitter, building the optics with WLO technology effectively reduces volume and space, while the devices offer good consistency and high beam quality, and the semiconductor process carries a cost advantage at volume.
Unlike the comparatively simple processes used for traditional optical lenses, the WLO process is more complex because it manufactures optics with semiconductor techniques and design thinking. Both the design and the processing stages demand more advanced design methods and more precise fabrication, so the corresponding processing carries high added value.
According to US supply chain reports (such as TechCrunch), Heptagon (acquired by ams) will provide the WLO wafer-level optical lenses for the TX emitter, mainly because Heptagon has accumulated many WLO design patents and strong technical strength. TechCrunch's analysis also names Taiwan's Himax as a potential future supplier.
Domestically, the semiconductor packaging and testing houses Huatian Technology and Jingfang Technology laid out WLO early and mainly provide back-end WLO processing; Huatian Technology in particular has mature processing capability and is expected to benefit in the future.
3D Visual Structured Light Scheme ——RX Infrared Receiving Part
In the 3D structured light scheme, the RX infrared receiving part is essentially an infrared camera that receives infrared light reflected by objects and collects spatial information. The infrared camera has three main parts: the infrared CMOS sensor, the optical lens, and the infrared narrow-band interference filter. Its basic structure resembles a mainstream visible light camera, but the specific parts differ: 1) the visible light CMOS sensor must resolve the three RGB colors and so needs high resolution, while the infrared CMOS only needs to detect near-infrared light and can use low resolution; 2) the visible light camera needs an infrared cut-off filter to block infrared light and pass only visible light, while the infrared camera passes only near-infrared light of a specific band and blocks visible light, so it needs a narrow-band filter; 3) because the visible light camera demands high image resolution, its optical lens design is complex, while the infrared camera's lens requirements are modest.
4.2.1 The infrared CMOS sensor needs customization
The infrared CMOS image sensor (IR CIS) receives infrared light reflected by hands or faces and is a technologically mature device: IR CIS parts already appear in the Samsung Note7 and the Fujitsu ARROWS NX F-04G, both equipped with iris recognition.
In the 3D vision scheme, the infrared CMOS sensor receives infrared images reflected by objects, the same principle as a visible light CMOS. The difference is that a visible light CMOS must resolve the three RGB colors and render high-definition images, so it needs high resolution, while the infrared CMOS only needs to detect near-infrared light in the band corresponding to the emitted light; its job is to obtain depth information. In the structured light scheme it only collects the infrared pattern after reflection off objects, so low resolution suffices. At present a 2-megapixel infrared CMOS meets general 3D vision needs (gesture recognition, face recognition, etc.).
Because 3D vision is just getting started, different manufacturers adopt different image recognition schemes with different infrared CMOS requirements (resolution, response speed, etc.), so the infrared CMOS used in 3D vision schemes must be customized.
Current infrared CMOS image sensor suppliers mainly include STMicroelectronics, Wonderland Optoelectronics, Samsung Electronics, Fujitsu and Toshiba. According to Yole's analysis, STMicroelectronics has developed a 3D imaging infrared sensor that may be used in the iPhone 8 and will begin supplying infrared CMOS image sensors to Apple at scale in the second half of 2017. The chip will be designed by STMicroelectronics and manufactured by TSMC, with wafer reconstruction (RW) provided by Tongxin Electric.
Domestically, few companies are involved in infrared CMOS sensors at present; Sibike laid out early. According to the company's public disclosures, it set up a dedicated team to develop 2-megapixel and 5-megapixel CMOS image sensors with high infrared sensitivity and has launched three products, the SP9250, SP9550 and SP9260, which improve infrared response by about 50% over traditional camera image sensors.
There is another infrared sensing scheme: the SPAD (single photon avalanche diode), already used in mobile phone proximity sensors, can perform the same function as an infrared CMOS and detect infrared light; an example is the STMicroelectronics SPAD sensor used in the iPhone 7.
A major problem with the infrared CMOS sensor at present is heat dissipation, which forces the chip to add extra metal heat sinks. Compared with an infrared camera, the SPAD offers a simple structure, small size, low cost and good heat dissipation, but its function is limited and its resolution is hard to raise: a SPAD tracks only a small amount of infrared light, which is more than enough in a proximity sensor. For high-quality 3D imaging the SPAD falls short of the infrared CMOS, and existing SPADs need a technical upgrade. According to Yole's analysis, an upgraded SPAD could also serve as the infrared-detecting "imager" in 3D vision, cutting cost and solving the heat dissipation problem.
4.2.2 The near-infrared narrow-band interference filter gains value
For 3D vision, the IR camera's filter differs greatly from the RGB camera's. By spectral characteristics, filters divide into band-pass filters, short-wave cut-off filters and long-wave cut-off filters. A band-pass filter passes light in a selected band and blocks light outside it. By bandwidth, band-pass filters divide into narrow band and wide band, usually judged by the ratio of bandwidth to center wavelength: below 5% is narrow band, above 5% is wide band. To suppress visible light interference, 3D vision products widely use narrow-band interference filters.
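The narrow-band versus wide-band criterion above is a simple ratio test, which can be sketched as follows (the example filter values are illustrative, not any specific product's specs):

```python
# Narrow- vs wide-band classification: a band-pass filter is
# conventionally "narrow band" when bandwidth / center wavelength < 5%.
def is_narrow_band(center_nm: float, bandwidth_nm: float) -> bool:
    """True if the filter's passband is under 5% of its center wavelength."""
    return bandwidth_nm / center_nm < 0.05

# Example values (assumed): a 940 nm filter with a 40 nm passband is
# narrow band (~4.3%); a 550 nm filter with a 100 nm passband is not (~18%).
print(is_narrow_band(940, 40))
print(is_narrow_band(550, 100))
```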
The traditional RGB visible light camera uses an infrared cut-off filter to remove unwanted near-infrared light, avoiding the false colors or moire caused by infrared contamination of the visible image while improving effective resolution and color reproduction. The infrared camera, by contrast, must reject ambient light, so it uses a narrow-band filter that passes only near-infrared light in a specific band (the band of the emitter's light source).
Current near-infrared narrow-band filters mainly rely on the interference principle, requiring dozens of optical coating layers; the technical difficulty is high, so they carry more value than traditional cut-off filters. A narrow-band filter's film stack generally alternates two materials, one of low and one of high refractive index, stacked to dozens of layers, and parameter drift in any layer can affect final performance. Moreover, a narrow-band filter's transmittance is very sensitive to film losses, making filters with high peak transmittance and narrow half-bandwidth very difficult to produce. Thin films can be prepared by many methods, including chemical vapor deposition, thermal oxidation, anodic oxidation, sol-gel, atomic layer deposition (ALD), atomic layer epitaxy (ALE) and magnetron sputtering, but film properties vary greatly by method; advanced narrow-band filters are currently made mainly by chemical vapor deposition.
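As a rough illustration of why such stacks require dozens of precisely controlled layers: interference coatings are typically built from quarter-wave layers whose physical thickness is t = lambda / (4n). The sketch below uses assumed refractive indices roughly representative of common low- and high-index coating materials; it is not the recipe of any actual filter.

```python
# Quarter-wave layer thickness for an interference coating:
#     t = lambda / (4 * n)
# Thickness errors of only a few nanometers shift each layer's optical
# path and degrade the stack's overall passband.
CENTER_NM = 940.0  # assumed design (center) wavelength

def quarter_wave_thickness_nm(n: float) -> float:
    """Physical thickness of a quarter-wave layer with refractive index n."""
    return CENTER_NM / (4.0 * n)

# Illustrative indices: SiO2 ~1.46 (low n), TiO2 ~2.40 (high n) near 940 nm.
for material, n in [("SiO2 (low n)", 1.46), ("TiO2 (high n)", 2.40)]:
    print(f"{material}: {quarter_wave_thickness_nm(n):.1f} nm")
```

Each layer is only on the order of 100-160 nm thick here, which is why deposition uniformity and drift control dominate the manufacturing difficulty.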
According to Barron's, VIAVI will supply near-infrared narrow-band interference filters for Apple's next-generation iPhone 8, and the two parties have signed a letter of intent: Apple will purchase 150 million filters from VIAVI for iPhone-series 3D vision. Besides VIAVI, current suppliers of near-infrared narrow-band interference filters include Buhler, Materion and Wavelength.
Domestically, Crystal Optoelectronics has strong technical strength and international competitiveness in color filters and is one of the important filter suppliers worldwide.
4.2.3 The optical lens requirements of the infrared camera are modest
Because the visible light camera demands high image resolution, its optical lens design is complex. Image quality depends not only on the number and size of CIS pixels but also on the quality and number of optical lens elements: better and more numerous elements improve imaging, but the element count also determines the camera module's height. Current smartphone lenses generally use 5P or 6P designs; more elements raise light transmission and image quality but complicate the lens design.
The infrared camera's lens requirements are lower than the visible light camera's, with higher tolerance on light flux, distortion correction and other specs, and most current 3D vision products use mature ordinary lenses. Overseas optical lens suppliers include Daliguang, Yujing Optoelectronics and Kanto Chenmei; domestic companies such as Shunyu Optics, Lianchuang Electronics, Xuye and Chuanhetian can also supply them.
Dedicated Image Processing Chip ——High Technical Barriers
The image processing chip must fuse the position information collected by the infrared CIS with the object plane information collected by the visible light CIS into a per-pixel image carrying depth, completing the 3D model; its data processing and computational complexity exceed those of a conventional ISP image processing chip. Most 3D vision solution makers therefore design the chip themselves or co-develop it with traditional ISP giants.
The chip carries high technical barriers, especially at the algorithm level, where depth information must be processed to match the 3D vision scheme. At present only a few chip giants worldwide can offer such products, including STMicroelectronics, Texas Instruments and Infineon.
A teardown of the first-generation Microsoft Kinect shows that its core image processing chip is PrimeSense's PS1080 SoC. We judge that the core 3D image processing chip in Apple's 3D vision solution will still be designed and supplied by PrimeSense and manufactured by foundries such as TSMC.
Beyond the core image processing chip, the whole 3D vision scheme needs many auxiliary chips for audio processing, video processing, storage, analog functions, ordinary camera control and so on. These chips are very mature and widely used in consumer electronics, and smartphones already carry many of them, so 3D vision can use the existing chips directly.
Visible Light Camera ——No New Increment
In a 3D vision system, whether structured light or TOF, the infrared part collects the Z-axis depth information to determine the object's depth of field, while the object's X/Y-axis plane information is collected by an ordinary visible light camera, so the visible light camera is indispensable to 3D vision.
However, smartphones today generally already carry at least two visible light cameras (one front, one rear), so adding 3D vision requires no additional visible light camera; the phone's existing cameras can be used directly. 3D vision therefore brings no new increment to visible light cameras.
System Module Manufacturing and Assembly ——Difficult and High Value
Because the 3D vision scheme involves many hardware parts, four of them must work in concert: the infrared emitting laser, the infrared receiving camera, the visible light camera and the image processing chip. The match between infrared emission and reception is especially important to the recognition quality and accuracy of the whole scheme, so packaging and integrating the system module is critical.
3D vision has already succeeded in motion-sensing interactive devices like the Microsoft Kinect, but such devices are bulky and place low demands on system assembly. As 3D vision moves into consumer electronics such as smartphones, the manufacture and assembly of the system module become very important.
Manufacturing the mobile 3D vision module is difficult, mainly in the following respects: 1) the precision optical elements in the TX transmitter, such as the DOE and WLO, demand very high assembly accuracy and difficult coaxiality adjustment; 2) the VCSEL laser in the transmitter needs spectral testing and calibration; 3) the TX transmitter, RX receiver and visible light camera are mutually independent, and their spatial positional accuracy and stability are critical to the final 3D imaging result, requiring difficult matching and calibration.
According to Taiwan's Science and Technology Times, the assembly of Apple's 3D vision module (including the TX transmitter assembly, the RX receiver assembly and system assembly) will be undertaken by Foxconn (system assembly and RX receiver assembly), LG Innotek (TX transmitter assembly), Sharp (RX receiver assembly) and other companies.
In the Lenovo Phab2 Pro phone, the module packaging and integration of the 3D depth camera was completed by Shunyu Optics. Domestically, besides Shunyu Optics, camera module companies such as O-Film Tech and Qiu Ti Technology also have strong technical strength.
Risk warning: the mobile adoption of 3D vision technology may proceed too slowly; related domestic companies lack competitiveness.
Extended Reading: Apple 3D Visual Report: Leading the Industry Trend (Part I)