First Generation Detector Arrays
Photon detectors were developed to improve sensitivity and response time, and have been developed extensively since the 1940s. Lead sulfide (PbS) was the first practical IR detector, sensitive to infrared wavelengths up to ~3 µm.
In the late 1940s and early 1950s, a wide variety of new materials were developed for IR sensing. Lead selenide (PbSe), lead telluride (PbTe), and indium antimonide (InSb) extended the spectral range beyond that of PbS, providing sensitivity in the 3–5 µm mid-wavelength infrared (MWIR) atmospheric window.
The end of the 1950s saw the first introduction of semiconductor alloys in the III-V, IV-VI, and II-VI material systems of the periodic table. These alloys allowed the bandgap of the semiconductor, and hence its spectral response, to be tailored for specific applications. Mercury cadmium telluride (HgCdTe, or MCT), a II-VI material, has since become the most widely used of the tunable-bandgap materials.
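The bandgap tailoring described above can be illustrated with a short numerical sketch. For Hg(1-x)Cd(x)Te, the bandgap grows with the cadmium fraction x, and the detector cutoff wavelength follows from λc ≈ hc/Eg ≈ 1.24 µm·eV / Eg. The bandgap expression below is the commonly cited Hansen empirical relation; the code is illustrative of the composition-to-cutoff trade, not a definitive device model.

```python
def hgcdte_bandgap_eV(x: float, T: float = 77.0) -> float:
    """Empirical bandgap (eV) of Hg(1-x)Cd(x)Te at temperature T in kelvin
    (Hansen empirical relation; illustrative values only)."""
    return (-0.302 + 1.93 * x
            + 5.35e-4 * T * (1.0 - 2.0 * x)
            - 0.810 * x**2
            + 0.832 * x**3)

def cutoff_wavelength_um(x: float, T: float = 77.0) -> float:
    """Cutoff wavelength in microns: lambda_c = hc / Eg ~ 1.24 / Eg(eV)."""
    return 1.23984 / hgcdte_bandgap_eV(x, T)

# Raising the Cd fraction x widens the gap and shortens the cutoff,
# moving the response from the LWIR band toward the MWIR band.
for x in (0.20, 0.30, 0.40):
    print(f"x = {x:.2f}: cutoff ~ {cutoff_wavelength_um(x):.1f} um at 77 K")
```

A composition near x ≈ 0.2 yields a long-wavelength (LWIR) cutoff, while x ≈ 0.3 places the cutoff in the 3–5 µm MWIR window mentioned above, which is why a single material system could serve both classes of sensor.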
As photolithography became available in the early 1960s, it was applied to the fabrication of IR sensor arrays. Linear array technology was first demonstrated in PbS, PbSe, and InSb detectors. Photovoltaic (PV) detector development began with the availability of single-crystal InSb material.
In the late 1960s and early 1970s, "first generation" linear arrays of intrinsic MCT photoconductive detectors were developed. These allowed LWIR forward-looking infrared (FLIR) systems to operate at 80 K with a single-stage cryoengine, making them much more compact, lighter, and significantly lower in power consumption.
The 1970s witnessed a rapid proliferation of IR applications, combined with the start of high-volume production of first-generation sensor systems using linear arrays.
At the same time, other significant detector technology developments were taking place. Silicon technology spawned novel platinum silicide (PtSi) detector devices, which have become standard commercial products for a variety of high-resolution MWIR applications.