Author: Deyan

http://transistorized.net

Happy birthday Mr. Electron!

Well… at least if we can trust one of the latest tweets from Imperial College: J. J. Thomson announced the discovery of a minuscule, negatively charged particle smaller than an atom. All during a regular Friday lecture at the Royal Institution on 30 April 1897!

So this means that Mr. Electron is hitting his 120s today! Hurray, electrons in my computer! I wish I could make it up to you with a cake, but I don’t really know where in spacetime you are right now.

Oddly enough, Thomson’s experiment yielded an extremely accurate ratio between the charge and mass of the electron, even by today’s standards.

Greetings, too, to all electronics engineers – our titles contain bits of two of the most influential 19th-century discoveries! Could today be marked as the day of electronics?

Finally, here’s how electron guns are made… Meh, don’t you think people had rather odd taste in magazine covers back in the day?

Sealing electron gun assembly – recklessly stolen from pulp librarian’s cover archives.


Nonlinear Dynamics and Chaos in Gas Discharge Tubes

I’ve been procrastinating heavily lately with high voltages and some noble gas tubes lying around. Seen below is some footage I took of a xenon flash tube connected to a high-voltage inverter salvaged from the LCD of a scrap laptop.

As the current provided by the DC-DC converter is by far not enough for a strong (and potentially blinding) discharge, I could stare at the weak discharge for a long time. Eventually I got to play a bit with the mini-lightning and noticed that the electromagnetic fields induced into my body by the noisy environment are radiated back to the flashlamp, which causes the discharge to flicker. These flickers reminded me of Chua’s circuit and the various fractal chaotic attractors. What if one used an ultra-high-speed camera to capture the chaotic trajectory and potentially fractal behaviour of the discharge? Notice how the discharge has a few very stable modes and switches between them before it becomes completely random. Also note that even in chaos mode the arc still has a higher probability of following certain paths in space than others, which is another feature of Chua’s and Rössler’s circuits. That behaviour is also strongly reminiscent of chaos diagrams like the ones below:


From order – to sub-order – to chaos, source: wikipedia

I dug up an intriguing paper from 2015, “Complex dynamics of a dc glow discharge tube: Experimental modeling and stability diagrams”. The group shows detailed statistical light-intensity diagrams from a similar discharge occurring in a neon sign glow tube. They have also derived the bifurcation diagrams of the glow discharge and concluded that it somewhat resembles a Hénon chaotic attractor.
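For reference, the Hénon attractor they mention comes from a deceptively simple two-variable map; here is a minimal sketch to generate it yourself (parameter values are the classic ones, nothing from the paper):

```python
import numpy as np

def henon(n, a=1.4, b=0.3, x0=0.0, y0=0.0):
    """Iterate the classic Henon map n times and return the trajectory."""
    xs, ys = np.empty(n), np.empty(n)
    x, y = x0, y0
    for i in range(n):
        x, y = 1 - a * x * x + y, b * x
        xs[i], ys[i] = x, y
    return xs, ys

xs, ys = henon(10_000)
# The points settle onto the fractal attractor; view it with
# matplotlib: plt.scatter(xs, ys, s=0.1)
```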

Although these kinds of studies have few practical applications, they are worth doing just for the sheer coolness of the project. Hmm, I have an old multimode HeNe gas laser tube lying around; maybe I could hook it up to a stronger-ish HV source, make it oscillate between modes, place a photodiode in the beam and start capturing intensities to reconstruct its fractal diagram. I am sure lots of experiments have been conducted on this, but still, it is cool and has merit as a potential geeky Christmas project.
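The photodiode idea boils down to reconstructing an attractor from a single intensity trace, and the standard trick for that is Takens time-delay embedding. A rough sketch, where the signal is a synthetic stand-in for a real capture and the delay/dimension values are arbitrary assumptions:

```python
import numpy as np

def delay_embed(signal, delay=10, dim=3):
    """Takens time-delay embedding: turn a 1-D intensity trace into
    dim-dimensional state vectors [s(t), s(t + delay), s(t + 2*delay), ...]."""
    n = len(signal) - (dim - 1) * delay
    return np.column_stack([signal[i * delay : i * delay + n] for i in range(dim)])

# Stand-in for a photodiode capture: a noisy oscillation
t = np.linspace(0, 100, 5000)
intensity = np.sin(t) + 0.1 * np.random.randn(t.size)
states = delay_embed(intensity, delay=25, dim=3)
# 'states' can be scatter-plotted in 3-D to look for attractor structure
```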

Plasmonic filters in nature — a follow-up

Just a quick follow-up on my previous post depicting surface plasmons. I came across a well-prepared pop-science video about butterflies under a scanning electron microscope. Notice how pouring isopropanol over the butterfly’s wings changes the wavelength they reflect. Destin describes the “losing of color” as light not being able to penetrate the nanoholes of the butterfly, which is partly true due to reflections by the liquid medium. However, the isopropanol also modulates the oscillation frequency of the free, unbound electrons in the “material” of the butterfly’s nanohole wing, thereby reducing or modifying the coupling between the incident photons and the interfering electrons. And doubtlessly all sorts of other second-order “ref-lec-to-rac-tive” effects as well. Notice the difference between the brown/blue hole arrays and their diameters.


An idea for a quick investigation comes to mind: what if one measures the energy of the reflected (filtered) light and compares it with the energy of the incident light over the very same filter bandwidth? How efficient are the butterflies’ nanohole arrays compared to man-made ones? Most likely the answer is not that straightforward, as man-made filters are designed for an optimized transmission coefficient, while butterflies use nanohole arrays to reflect light so as to attract other species or protect themselves from them. It is also highly likely that there are already tons of investigations on the butterfly metamaterial topic.

One last thing that I came across some time ago: similar nanohole patterns are observed when anodizing aluminium and subsequently etching it with e.g. a fine ion etcher. Here’s a preview on the topic: A visible metamaterial fabricated by self-assembly method.

Plasmonic filters — the deus ex machina of the century

Today’s semiconductor news on Spectrum presents an article about plasmonic color filters, “Flexible and Colorful Electronic Paper Promises a New Look for eBooks” by Dexter Johnson. Coincidentally, last week I had a heated discussion with a few physicists on the same topic, so I thought I’d introduce this animal here and state my personal engineering view on it.

Act one: introduction

In a very broad sense, surface plasmons are free-electron oscillations occurring at the interface between a metal and a dielectric, excited by a light-metal-dielectric interaction. These oscillating electrons, bound to no particular atom, can be set in motion by incident photons or electrons falling on the junction between the two materials. The frequency of their oscillations depends on the junction thickness itself, as well as on the distance to neighboring pairs of oscillating electrons. When light strikes a plasmonic material, apart from exciting free electrons, it also couples to them (they actually form a kind of surface electromagnetic field), and thus creates a self-sustaining interference phenomenon. The key feature of this concept is that only photons with a specific energy can couple with the oscillating electrons, while the rest pass through; hence, the process can naturally be used as a color filter.

Such metamaterials were described theoretically in the mid-50s, but they only became popular during the past ten years thanks to the rapid improvement in lithographic techniques. They could be used in applications ranging from display technology to image sensors, which is why they have just recently been approached by the big players in the chip fabrication field. Both of these technologies need some kind of light-filtering element, and both currently use organic color filters to achieve their goals. The problem with organic color filters is that they are complicated to produce and degrade quickly with time, especially when UV and high temperatures are involved, as is the case on the die of an image sensor.

Act two: the complication

Plasmonic color filters are physically a sandwich made of metallic bread (usually tungsten or any of the noble metals) and dielectric butter (SiO2 or equivalent). This sandwich has also been accurately bitten by rats, creating a superfine nanohole array which looks like this:


Basic structure of a generic plasmonic color filter
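As a side note on how such structures behave, the first-order transmission peak of a square nanohole array at normal incidence is usually estimated from the SPP momentum-matching condition. A sketch with illustrative, assumed numbers for a gold/SiO2 stack (none of these values come from the article):

```python
import math

def spp_peak_wavelength(pitch_nm, i, j, eps_metal, eps_dielectric):
    """First-order estimate of the transmission peak of a square nanohole
    array (normal incidence), from the standard SPP momentum-matching
    condition: lambda = P / sqrt(i^2 + j^2) * sqrt(em*ed / (em + ed))."""
    factor = math.sqrt((eps_metal * eps_dielectric) /
                       (eps_metal + eps_dielectric))
    return pitch_nm / math.hypot(i, j) * factor

# Assumed values: 500 nm pitch, eps_gold ~ -25 in the near-IR, eps_SiO2 ~ 2.1
peak = spp_peak_wavelength(500, 1, 0, -25.0, 2.1)
print(round(peak))  # ~757 nm for the (1,0) mode
```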

At first glance such a technology looks very silicon-friendly: all we need to create our filter structures is two extra metal layers added to the CMOS process. Sounds like a piece of cake. So why did the big semiconductor players abandon this scheme as soon as they had a sniff at its surface? Plasmonic color filters are still very experimental; I don’t think we can even call them an immature technology yet. Although quite a few academic groups are working on the problem, the prospects for mass-scale production currently seem miraculous. But hey, I am very happy when I see progress on the topic, folks! We’ve seen it many times: many advancements in history have been the outcome of scientific mumbo-jumbo once labeled absurd or strange.

It has been shown that plasmonic filter structures have an excellent bandpass quality factor, dwarfing even the best organic compound color structures ever reported. The Q-factor, however, is not the only element in the picture. The transmission coefficient of the best reported plasmonics is on the order of 0.2-0.3, which is very disappointing. The filter’s response is also not very steady – towards UV and deep UV, full transmission is usually observed. Perhaps this could be solved with an extra glass UV filter. But still, can we not use them for accurate light spectrometers, where light is abundant and low transmission coefficients are affordable?

Act three: climax

Well, here is where engineering comes into play and destroys everything, for the time being… Plasmonic filters currently rely on an extremely accurate lithographic process called electron-beam lithography, which, combined with dry plasma etching, has an accuracy on the order of a few nm. Apparently, even that is not enough to create a filter with a good yield. All reported plasmonic filters, including the one in this morning’s popular-science article in Spectrum, are manufactured on a scale of a few micrometers – as the authors of the paper suggest, just a pixel of the Retina display of an iPhone. Using e-beam lithography in mass production is a fairytale. Gold is a forbidden word in semiconductor fabs, so that material falls out as a filter candidate too. Solving optical crosstalk and alignment between adjacent filters in RGB arrays seems like another wonder to me. And how about the fact that the microlenses deposited on top of the filters are polymer – yet another dielectric, which, as the theory suggests, will modulate the surface plasmon resonance? Or the non-uniformity of the dielectric’s thickness? All these issues create a highly non-linear outcome. Engineers don’t like non-linearities… neither does large-scale production.

Act four: resolution

The outcome of this drama is still very unforeseeable, and here I let you cast the die. One thing is certain – no matter the aftermath, a generation of scientists is gaining momentum along with the new era of metamaterial science. However, for the time being, whenever somebody starts talking about how plasmonics will change the world in a year’s time, I just smile and listen carefully.

Radiance and Luminance – the many funny units

In electronics, we typically have very well standardized measurement units, at least when it comes to units such as volts, amperes, watts and so on… However, I cannot say the same about the optics field. I have always been confused when dealing with photometric units, but after a recent discussion with one of the camera gurus in our field, I got even more confused, which also gave me the inspiration for this writing. So what is the difference between radiance and luminance, and how obscure can the life of a camera designer get with such a cocktail of measurement units?

Radiance is a measure of radiometric power – it measures the rate of light energy flow. So far so good! It is (normally!) expressed in watts (joules per second) per steradian per square meter.

Luminance is a measure of the power of visible light. But what is visible light? This is where things get confusing. Within luminance we generally get two different flavours, possibly inspired by the fact that the human eye has two different photoreceptor cells:

  • rods – extremely sensitive, and can be triggered by a single photon. So at very low light levels, what we see is primarily due to the rod signal, which also explains why colors cannot be seen at low light levels: a single photon does not carry color information.
  • cones – these require significantly brighter light (more photons) in order to produce a signal, and thus provide us with color information.
  • photosensitive ganglion cells – these were discovered recently and are responsible for synchronizing our biological clock; I liken them to electronic comparators.

Okay, the last paragraph drifted a bit. Back to luminance – its two flavours are:

  • Photopic flux is expressed in lumens and is weighted to match the responsivity of the human eye, which is most sensitive to yellow-green. But still – whose eye is most sensitive to yellow-green? We are all different, and there is no single sharp standard.
  • Scotopic flux is weighted to the sensitivity of the dark-adapted human eye. Here as well – what is dark, anyway?

Two derivatives of radiance and luminance are irradiance and illuminance, respectively. These measure the corresponding light flux per unit area at the receiver side. In other words, radiance is the energy radiated from the light source towards a unit area, and irradiance is the energy received, minus the light lost along the way. Radiance and luminance are typically expressed in W/m2/sr and cd/m2, while irradiance and illuminance come in W/m2 and lux (lm/m2). But there is more to it – there exist a dozen other measurement units, and here is where you should get some popcorn and start reading or browsing around Wikipedia:
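The bridge between the radiometric and photometric worlds is the 683 lm/W weighting by the eye's photopic response. A minimal sketch of that conversion – note the Gaussian V(λ) is only my rough stand-in for the tabulated CIE photopic curve, and its 42 nm width is an assumption:

```python
import math

# Photometric flux = 683 lm/W times the spectral power weighted by the
# photopic luminosity function V(lambda), which peaks at 555 nm.
def v_lambda(wavelength_nm):
    """Crude Gaussian stand-in for the CIE photopic curve (assumed width)."""
    return math.exp(-((wavelength_nm - 555.0) ** 2) / (2 * 42.0 ** 2))

def luminous_flux_lm(spectrum):
    """spectrum: list of (wavelength_nm, watts) samples."""
    return 683.0 * sum(w * v_lambda(lam) for lam, w in spectrum)

# 1 W of monochromatic light at 555 nm gives the maximum possible 683 lm
print(luminous_flux_lm([(555.0, 1.0)]))  # 683.0
```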

Let’s start with the candela, as I will have to refer everything to this unit, which is part of the SI standard.

1 candela (new candela) is the intensity of a source that emits monochromatic light of frequency 540×10¹² hertz with a radiant intensity of 1/683 watt per steradian. But why 1/683 watt? The number 683 very much smells like a weak British horse’s power to me… Prior to 1948 the candela was not standardized, and a number of countries used different values for luminous intensity, typically based on the brightness of the flame from a “standard candle”. Ha-ha-ha!

Then we start:

Nit – 1 nit = 1 cd/m²

Stilb – a unit of luminance; comes from the Greek word stilbein, which means “to glitter”. 1 stilb = 10⁴ cd/m²

Apostilb – π apostilb = 1 cd/m² – somebody thought they could neglect the rest of pi with such an easy hand!?

Blondel – 1 blondel = 1/π × 10⁻⁴ stilb – this unit is obviously reserved for blonde people

Lambert – 1 lambert = 1/π cd/cm²

Skot – 1 skot = 10⁻³/π cd/m²

Bril – 1 bril = 10⁻⁷/π cd/m²

Foot-lambert – 1 foot-lambert = 1/π cd/ft²

Foot-candle – 1 foot-candle = 1 lm/ft²
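To keep track of this zoo, here is a small helper that folds the luminance conversions above into one table. The factors follow the definitions listed above; this is my own sketch, not any standard library:

```python
import math

# Conversion factors to cd/m^2, following the definitions listed above
TO_CD_PER_M2 = {
    "nit":          1.0,
    "stilb":        1e4,
    "apostilb":     1.0 / math.pi,
    "blondel":      1.0 / math.pi,                  # same value as the apostilb
    "lambert":      1e4 / math.pi,                  # 1/pi cd/cm^2
    "skot":         1e-3 / math.pi,
    "bril":         1e-7 / math.pi,
    "foot-lambert": (1.0 / math.pi) / 0.3048 ** 2,  # 1/pi cd/ft^2
}

def to_nits(value, unit):
    """Convert a luminance value in the given historical unit to cd/m^2."""
    return value * TO_CD_PER_M2[unit]

print(to_nits(1.0, "stilb"))                   # 10000.0
print(round(to_nits(math.pi, "apostilb"), 6))  # 1.0
```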

The image sensors field is baffled by all these units, some of which are still in use today. A possible explanation for why we have so many photometric units, compared to a single sharp one for electric current, is that we humans naturally have light detectors, but not electric current ones (well, sort of). Everyone owning such receptors creates the perfect environment for speculation. This, combined with vigorous Victorian-age pride, is possibly the cause of all those weird units. What do you think?

Lastly – if somebody ever asks you, this is one foot-candle!


One foot-candle, courtesy of General Electric


ATLAS silicon strip detectors and charge sensing

Some time ago, scientists at the Large Hadron Collider (LHC) at CERN reported the potential discovery of a new fundamental particle which does not fit anywhere in the Standard Model of physics. According to “the news”, the latest data from ATLAS and CMS (the LHC’s two largest detectors) show two unexpected “bumps” over the usual gamma-ray background, correlated and acquired from two separate detectors. According to physicists, this may point to the existence of a particle whose significance would dwarf by light-years even the recent discovery of gravitational waves.

It is not yet certain whether this measurement will be confirmed or rejected, but the latest news suggests that the significance of the results is fairly low – approximately 1.6 sigma. That fact inspired me to write a bit about the basics of the basics of silicon strip detector charge sensing, a technique that is stone-age technology in commercial light-sensing CMOS image sensors nowadays.

So what are strip detectors and how are they used? They are basically PN junctions with an extremely wide aspect ratio which, as their name suggests, look like strips. Here’s a sketch:


A bird’s eye view of silicon strip detectors

These strips usually share an N-type substrate, while each strip is P+ doped and covered by aluminium, with some extra insulation layers in between. The LHC scientists are interested in observing interference patterns in X- and gamma rays caused by the decay of the sought-after particles. Apart from their intensity, what also interests them is the spatial trajectory of the high-energy rays. In order to detect the 2D position of the gamma rays, they have come up with a very clever strip array configuration. Let me explain with another sketch:


Particle incidence angle detection using parallel strip configuration

An incident particle has the highest probability of generating electron-hole pairs in the strip crossed by the X-ray photon, which already provides a kind of one-dimensional readout. To obtain angular information, the adjacent strips can also be read out and a particle correlation reconstructed. In other words, if the gamma ray happens to fall at an angle of e.g. 45 degrees, it will generate electron-hole pairs in two or three adjacent silicon strips. This already gives us almost-2D particle trajectory information. However, CERN engineers have taken the technique even further by adding another, crossed pair of detectors underneath the upper set:


Hybrid X- and Y- direction parallel strip sensor configuration

That way they can extract position and angular information not only in the x-direction but also in the y-direction, which, with some post-processing, provides accurate particle intensities and trajectories. But how can these PN silicon strips be read out?
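The crossed-layer idea can be reduced to a toy reconstruction: each fired strip in the X layer paired with each fired strip in the Y layer yields a candidate crossing point. A sketch, where the 0.01 mm pitch is an illustrative assumption:

```python
def reconstruct_hits(x_strips_hit, y_strips_hit, pitch_mm=0.01):
    """Toy reconstruction for one crossed-strip module: every fired strip
    in the X layer paired with every fired strip in the Y layer gives a
    candidate (x, y) crossing point, in mm from the module corner."""
    return [(ix * pitch_mm, iy * pitch_mm)
            for ix in x_strips_hit
            for iy in y_strips_hit]

# A particle crossing at ~45 degrees fires two adjacent X strips and a
# single Y strip, giving two candidate points in the same column
hits = reconstruct_hits([100, 101], [250])
print(hits)
```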

The simplest method of reading out thousands of strips is to use an integrated charge amplifier and digitization electronics per channel. Charge-sensitive amplifiers have not been very widely used with passive-pixel CMOS image sensors in the past, but they have proven very suitable for single-detector readout. They are still used in single-line CMOS line-scan sensors thanks to their low-noise capabilities at low detector capacitance.

A typical and commonly used scheme is an operational amplifier-based integrator with an integrating capacitor in the feedback, sketched below:

A basic charge amplifier topology for strip sensor readout

These amplifiers have a high input impedance; they integrate weak charge pulses, convert them into voltage pulses for amplification, and then buffer the output for readout by the next block in the chain. Because of this operation, this type of amplifier is called a “charge amplifier”. The first stage is usually a low-noise differential pair, and its open-loop gain is set sufficiently high that the amplification is not influenced by the detector capacitance, which reduces the gain in the feedback. The output stage is a low-impedance buffer so it can drive the next circuits in the chain, typically the S/H stage of an ADC.

When particle decay rays strike the silicon strips, signal charge pulses Qs are generated, with an amplitude proportional to the particle energy. Due to this charge generation, the input potential of the charge amplifier lifts up and, at the same time, a potential of reverse polarity appears at the output because of the negative feedback. Since the amplifier’s open-loop gain is sufficiently large, the output works through the feedback loop to pull the input terminal’s potential back to zero, after a settling time dependent on the unity-gain bandwidth of the opamp itself. As a result, the signal charge Qs is integrated onto the feedback capacitance Cf, and the output voltage changes according to the integrated charge. Since the feedback resistor Rf is connected in parallel with the feedback capacitor Cf (providing the DC feedback path), the output voltage then slowly discharges with the time constant τ = Rf · Cf. The output voltage of such a charge amplifier is scaled by the size of the feedback capacitor Cf, so Qs and Cf must be chosen wisely to fulfill the desired dynamic range. It follows that the noise performance and dynamic range of this readout scheme are in direct trade-off: increasing the dynamic range leads to a lower swing on the capacitor and hence increases the relative noise, and vice versa.
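The behaviour described above – an instantaneous -Qs/Cf step at the output followed by an RC decay through Rf – can be summarized in a few lines; the component values are illustrative assumptions, not taken from any real readout chip:

```python
import math

def charge_amp_output(t_s, q_signal, c_f, r_f):
    """Idealized charge-amplifier response: the charge pulse Qs appears as
    a -Qs/Cf step at the output, then decays through Rf with the time
    constant tau = Rf * Cf."""
    tau = r_f * c_f
    return -(q_signal / c_f) * math.exp(-t_s / tau)

# Illustrative values (assumed): Qs = 4 fC, Cf = 1 pF, Rf = 10 Mohm
Qs, Cf, Rf = 4e-15, 1e-12, 10e6
v_peak = charge_amp_output(0.0, Qs, Cf, Rf)  # about -4 mV at the peak
tau = Rf * Cf                                # 10 us decay constant
```

Note how the peak amplitude is set purely by Qs/Cf, while Rf only determines how quickly the baseline recovers – exactly the dynamic-range versus noise trade-off discussed above.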

Note that the ATLAS detector has a total of over 200 m2 (square meters!!!) of pure detector strips! With a strip size of 0.01 mm by 40 cm, that works out to an impressive number of roughly 50 million strips and readout channels. With such a huge set of sensors, both ATLAS and CMS rely on the statistical significance of their measurements, and the weird correlation in the slight gamma peaks might truly be caused by a completely new fundamental particle. However, the readout complexity of such an enormous set of sensors is colossal, which makes instrumentation errors a plausible explanation as well.

Fingers crossed that all the sensing electronics works flawlessly and that all abnormal peaks detected are due to a newly detected particle.


Random processes in time and frequency – an attempt at an infographic

Some time ago, I got fascinated by a few infographics from absorptions. The dial-up modem poster deserves special attention!

I have been looking at some topics on noise recently and decided to attempt assembling my first intentional infographic. Here it is:


Random Processes in Time and Frequency

The background is a 24 hour full spectrum scan (waterfall chart, 6 MHz – 2.2 GHz) which I captured with my SDR dongle last summer up in the Balkan mountains. Here you can find another version.

It ended up being a bit messy and probably not very useful, either for novices or for professionals in the field. Nevertheless, I have accumulated ideas for another infographic which should be very entertaining. Stay tuned for a follow-up post.