Month: January 2015

Coupled electrical resonators

Greetings from the lab!

All of us lab rats have been quite busy lately with various exciting experiments. Here is a tiny bit of Guowei’s experiment, which, in its second version, will hopefully lead to some very significant results 🙂

A live demo of the so-called Q-factor.


The cost of knowledge?

Not very related to the scope of our blog, but still… Why do companies keep knowledge to themselves instead of spreading it as widely as possible? Well, one might argue that otherwise they would lose the game against the rest of the competitors in the jungle, and they might be right. We do not live in an ideal world, I know, but I imagine, one day, a world without patents, without proprietary technology, without even proprietary physical phenomena. How can a combination of a few physical phenomena be proprietary at all, as was the case with some military applications we know from the past (hint: 1945)?

Unfortunately, universities cannot keep up with developing new technologies and revealing what industry discovers and implements daily, which, apart from teaching, is probably their second most important role in society. Just imagine a world where all universities and companies shared their scientific discoveries! The scientific competition, I imagine, would be much, much higher; yes, it would be tougher to compete, but in such a world scientific advancement would rush ahead at a higher pace too. Too many groups are re-inventing the wheel nowadays due to either “proprietarism” or “non-bother-to-publisharism”. If we could learn from each other’s mistakes, I imagine the curve of development progress would shift from linear towards… still linear, but with a higher slope dy/dx.

Back to why I am writing this: there is a yearly conference on CMOS image sensor technology which attracts a lot of major image sensor design companies (the people who hold most of the knowledge in our vision sensor field). My point and anger? (not really) It is a workshop oriented towards the very same design groups organizing or helping out the organizers, so knowledge (if one can speak about knowledge in the case of this conference) is somehow proprietary again. In such a place one goes to sniff around for opportunities, to meet “highly ranked” (as the ad says) people, and to try to squeeze out as much as possible about what they are doing during the late-evening dinner while all the “major” attendees are drunk. The speakers possibly also get another “free” day off, which they may quite enjoy, and giving such talks is in general an interesting task. It is great that such conferences exist, and meeting people from your own field is definitely a pleasure, but at such a price this becomes a vicious circle and we again end up having industrial-only attendees. Here is a quick reference to the level of craziness I am referring to:

IS2015 costs per attendee

Why not be a bit more open when it comes to knowledge? What do you think?

Signals, transistors and music

Hej hopp! Not long ago (well, technically last year) I was helping out a fellow in the lab next door with his RF LNA simulations. All of a sudden our afternoon turned into a discussion about various guitar distortion effect pedals and their circuit implementations. Today I want to show you one of the simplest possible active circuits for distorting a signal.

So what do we call distortion? Distortion is the deviation of an output signal waveform from some sort of reference signal. Put that way, one might argue that an amplified signal is also a distortion of sorts; however, when we speak about distortion we normally refer to changes in the wave shape (time domain), not a simple amplitude scale-up.

In mathematics and engineering the sine wave is the simplest possible waveform, in the sense that one can generate any other waveform by adding up a (possibly infinite) number of sine waves. In practice we cannot add an infinite number of sine waves together, and thus we are limited in what we can generate. Though out of the scope of this topic, have a look at the Gibbs phenomenon if you want to escape the ideal world of mathematicians and are interested in practical waveform generation, at the price of somewhat more maths. In music there are normally no instruments that produce pure single sine wave tones (bar some tuning forks at e.g. 440 and 880 Hz), but for ease of explanation the rest of this post uses pure sine wave signals as a reference.
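
To make the adding-up idea concrete, here is a minimal numpy sketch of mine (the frequency and sample count are arbitrary choices) that builds a square wave from a growing number of odd sine harmonics; the stubborn ~9% edge overshoot it prints is exactly the Gibbs phenomenon mentioned above:

```python
import numpy as np

# Build a square wave from its odd sine harmonics:
# square(t) ~ (4/pi) * sum over odd k of sin(2*pi*k*f0*t) / k
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
f0 = 3.0  # fundamental frequency in Hz (arbitrary)

def square_from_sines(t, f0, n_harmonics):
    """Partial Fourier sum of a square wave using n_harmonics odd terms."""
    y = np.zeros_like(t)
    for k in range(1, 2 * n_harmonics, 2):  # k = 1, 3, 5, ...
        y += np.sin(2 * np.pi * k * f0 * t) / k
    return (4.0 / np.pi) * y

for n in (1, 5, 50):
    y = square_from_sines(t, f0, n)
    # The overshoot near each edge does not die out as n grows; it settles
    # at roughly 9% of the jump (peak ~1.18): the Gibbs phenomenon.
    print(f"{n:3d} harmonics -> peak value {y.max():.3f} (ideal square: 1.0)")
```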

Back to circuits, here is the simplest possible electronic distorter circuit I can currently think of:

A half-wave rectifier.

This circuit removes part of the fundamental (only one half of the wave passes through) but introduces noticeably stronger harmonics at multiples of the fundamental, which can often be very desirable.
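
If you would like to see those harmonics appear without soldering, here is a small numpy sketch of an ideal half-wave rectifier (the diode drop is ignored, and the sample rate and tone frequency are my own choices, picked so the FFT bins land exactly on the harmonics):

```python
import numpy as np

fs, f0, n = 8000, 100, 8000     # sample rate, tone frequency, 1 s of samples
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)  # pure input tone
y = np.maximum(x, 0.0)          # ideal half-wave rectifier

spec = np.abs(np.fft.rfft(y)) / n   # one-sided magnitudes (not doubled)
freqs = np.fft.rfftfreq(n, 1 / fs)
for k in range(6):                  # DC, fundamental, first few harmonics
    idx = np.argmin(np.abs(freqs - k * f0))
    print(f"{k * f0:4d} Hz: {spec[idx]:.4f}")
# A DC term and even harmonics (200 Hz, 400 Hz, ...) appear, while odd
# harmonics above the fundamental stay at zero for an ideal rectifier.
```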

Moving on to an active-element solution, a side-effect of most electronic amplifiers can be exploited to achieve the desired distortion. The phenomenon is often called clipping. Here is an illustration:

Sine wave clipping

To give a better overview without wasting much time, I decided to hook up a Simulink model and fetch some plots. However, after some frustration with this amateur tool, I went back to my dear old friend Virtuoso and hooked up a common-source amplifier in a 180nm process. Here is the common-source amplifier testbench I used for my examples.

An NMOS common source amplifier.

An input-output transfer function, input bias voltage DC sweep.

First I did a DC sweep to find a suitable operating point. Have a look at the amplifier’s transfer function (right): even though we often assume that in a certain region it is linear, we can see that it actually isn’t (green curve), at least not in this 180nm CMOS process. I have used a fairly high impedance load of 1 MOhm so as to increase the gain of this circuit, which is approximately equal to A_{0} = g_{m} \times R_{L}. So, even with the amplifier properly biased in the middle of the output range at ~550 mV, and applying a 1 kHz sine wave with a reasonable swing so that we don’t overdrive it, one can still observe some minimal distortion. On the left side the ideal input sine wave (red) and the amplified output sine wave (green) can be seen. To the right is an FFT spectrum plot of the amplified (green) sine wave. You can also see some FFT spectral leakage, as I was too lazy to set up a Fourier transform with coherent sampling.
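
As a side note, here is a back-of-envelope estimate of that A_{0} = g_{m} \times R_{L} gain in a few lines of Python. The bias current and overdrive voltage are my own illustrative assumptions, not the values from the Virtuoso testbench, and the square-law g_{m} formula is only a rough long-channel approximation at 180nm:

```python
import math

# Back-of-envelope estimate of A0 = gm * RL for the common-source stage.
# The square-law gm = 2*Id/Vov is only a rough long-channel approximation.
Id = 10e-6    # drain bias current, A (assumed)
Vov = 0.15    # overdrive voltage Vgs - Vth, V (assumed)
RL = 1e6      # load resistance, Ohm (the 1 MOhm from the post)

gm = 2 * Id / Vov   # transconductance, ~133 uS
A0 = gm * RL        # small-signal voltage gain
print(f"gm = {gm * 1e6:.0f} uS, A0 = {A0:.0f} V/V ({20 * math.log10(A0):.1f} dB)")
```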

A fairly low-distortion output sine wave.

However, what happens when we overdrive the amplifier? As the output swing is limited by the power supply, the output starts to saturate and thus we start to clip the sine wave. The higher the input drive swing, the stronger the clipping and distortion at the output. One can observe on the FFT that the power of the 1 kHz fundamental tone is now re-distributed over a number of harmonics.
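
For those without a Cadence license, the trend is easy to reproduce numerically. This small numpy sketch of mine (not the Virtuoso testbench) clips a sine wave at asymmetric levels, roughly mimicking a single-supply stage, and prints the first few harmonic magnitudes as the drive increases:

```python
import numpy as np

fs, f0, n = 48000, 1000, 48000   # coherent sampling: exactly 1000 cycles
t = np.arange(n) / fs

def harmonic_levels(x, k_max=5):
    """Single-sided FFT magnitudes at the first k_max harmonics of f0."""
    spec = np.abs(np.fft.rfft(x)) * 2 / n
    freqs = np.fft.rfftfreq(n, 1 / fs)
    return [spec[np.argmin(np.abs(freqs - k * f0))] for k in range(1, k_max + 1)]

for drive in (0.5, 2.0, 10.0):   # input amplitude relative to the clip levels
    # Asymmetric clipping (like a single-ended stage) creates even as well
    # as odd harmonics; symmetric clipping would create odd harmonics only.
    x = np.clip(drive * np.sin(2 * np.pi * f0 * t), -0.6, 1.0)
    levels = harmonic_levels(x)
    print(f"drive {drive:4.1f}x:",
          "  ".join(f"H{k + 1}={v:.3f}" for k, v in enumerate(levels)))
```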

A higher overdrive.

Heavy distortion; note that the second harmonic (one octave up) carries about 1/3 of the fundamental tone’s power:

Very high overdrive, a heavy distortion

Seeing all these pictures, we can conclude that the various combinations of harmonics give us all these different (some pleasurable, others less so) sound effects. Various amplifier/distorter circuits sound different, and the choice is up to the musician’s/designer’s taste. Back in the early days of modern music (e.g. Pink Floyd et al.) musicians were to some extent circuit designers/experimenters, tuning up the perfect circuit implementation to suit their needs. In the past 20 years direct digital synthesis and signal processing have offered a number of benefits to musicians; however, the discrete, quantized nature of these devices imposes a number of limitations when it comes to distortion effects. I was aiming to find a way to play my sine waves (this is why I initially approached Simulink, as it has an audio sink function), however it is a somewhat lengthier operation and I am leaving it to your imagination.

It is fascinating to see how such a simple electronic phenomenon has dramatically added to the variety of music available today. I offer you an example of a very slight distortion in combination with a German flute and some digital delay and harmonization. Only 50 years ago one couldn’t even imagine such sounds. Hmm, what will music sound like by the time I reach retirement age?

Energy in a cup of coffee

First day in the lab after some lengthy Christmas holidays! Almost a little bit too much. A proper way of starting the day means sharing a cup of coffee with my fellow lab rats. I love morning espressos in my favorite cup, hijacked virtuously from my previous job by means of polite mooching, yippie. 🙂

A double espresso in a huge cup.

A long time with no espresso meant burning my tongue at the first sip, and hence this post. Could the thermal energy “invested” in a double espresso kill me if it were transformed into gamma rays? I offer you here some of my ultra-primitive (8th-grader’s) random morning nonsense thoughts.

  • We can assume that coffee is water (oh, the irony), and therefore its specific heat is 1.0 cal/(g·°C), i.e. the same as water.
  •  Because something annoys me with the unit of calories, let’s convert it to something more useful, e.g. joules:

1 cal = 4.184 J

  •  The volume of a double espresso equals the volume of two single espressos (WOW!), which is therefore 60 ml; let’s also assume that 60 ml of coffee weighs 60 g to make our lives simpler.
  •  Tap water is at 20 °C and a properly extracted espresso should be around 75 °C, thus we have a delta of 55 °C to “invest” in it. The energy required to get this precious liquid will be:

55 \textdegree C \times 60 g \times 4.184 \frac{J}{g \cdot \textdegree C} = 13807.2 J

Gray (Gy) is a unit measuring the ionising radiation dose absorbed by a body, and is defined as:

1 Gy = 1 \frac{m^{2}}{s^{2}} = 1 \frac{J}{kg}

We can thus very easily convert the energy required to heat the coffee into an absorbed radiation dose:

\frac{13807.2 J}{70 kg} \approx 197 Gy

I use 70 kg here as that is close to an average person’s weight. So how much is 197 Gy? According to this wikipedia table, and assuming a 100% absorption rate, 197 Gy would pretty much kill me within 24 hours. More accurate absorption models exist, however; I’ll skip those for the time when humanity discovers what exactly black holes are…
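
For the skeptical, here is the whole back-of-the-napkin calculation in a few lines of Python, under the same assumptions as above (coffee is water, 1 g/ml, 100% absorption):

```python
# The espresso-to-gray estimate from above, step by step.
cal_to_J = 4.184        # 1 cal = 4.184 J
mass_g = 60.0           # double espresso, assuming 1 g/ml
delta_T = 75.0 - 20.0   # tap water -> serving temperature, deg C
body_kg = 70.0          # roughly an average person

energy_J = mass_g * delta_T * cal_to_J   # specific heat of water: 1 cal/(g*degC)
dose_Gy = energy_J / body_kg             # 1 Gy = 1 J/kg, at 100% absorption
print(f"Heat energy: {energy_J:.1f} J")      # 13807.2 J
print(f"Whole-body dose: {dose_Gy:.0f} Gy")  # ~197 Gy, far beyond lethal
```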

If all my simple thoughts above are correct, this happens to be a fairly scary fact. Hmmm, another, less deadly, question pops up in my mind: could the very same energy kill a microchip if delivered as a focused (0.1 mm) laser beam at 600 nm wavelength, projected onto the surface of a silicon die with an 8000 Å thick Al top metal layer, assuming no passivation layer has been deposited on top?

Greetings from Germany!

Hello, fellow co-writers and readers!

Desi here, just wanna say I’m still alive… And happy new year! May it be another jolly year of circuit building, signal processing and photon-electron action!

I greet you from Berlin with this fabulous display of electricity and light:


As stated in our very first post, I am currently proving that I am not a robot by having a vacation of sorts. However, I will be back quite soon and I’ve even prepared a little surprise. Here in Germany I discovered the miracle of Conrad: a dream store for every tech geek out there. Today I prepared the wires from the tiny electronics kit I had bought. Just a teaser for what’s going to occupy my time and mind in the next couple of days…


Of course, a blog post with my project will be published as well. Soon!

Crest factor and how it is useful

What is a crest factor and how is it useful? Let’s start with a definition of the term. The crest factor of any time-domain waveform is the ratio between its peak value and its RMS value. It is very useful in the sense that it gives the person analyzing the data an overview of how impulsive a waveform is. Very high peak values compared to the average (RMS) magnitude are often associated with wear and stress.

It is useful not only in electronics, where one can visualize the dynamics of a signal, but also in mechanical engineering and hydraulics. One can measure the periodicity of very high stresses in e.g. two meshing gears, or pressure fluctuations in a hydraulic pump and hence the risk of micro-cavitation. Mechanical structures subjected to vibrations with a very high crest factor often suffer material fatigue; all these processes can be indirectly connected to peaks in the time-domain waveform, in other words to the crest factor.

Fourier transforms are used in every scientific field; signals with extremely short, high peaks spread their energy across the spectrum and can look like random noise on FFT diagrams, which often leads to confusion in e.g. mechanical vibration, electrical signal and other analyses. This is why the crest factor also gives us an instantaneous hint about the noise in the signals we are working with.
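
Since the definition is a one-liner, here is a small numpy sketch computing the crest factor of a pure tone (sqrt(2) ≈ 1.41), a square wave (≈ 1.0) and a sine with periodic impacts added on top, loosely imitating a damaged gear tooth:

```python
import numpy as np

def crest_factor(x):
    """Peak absolute value over the RMS value of a waveform."""
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

t = np.linspace(0.0, 1.0, 10000, endpoint=False)
sine = np.sin(2 * np.pi * 50 * t)   # pure tone: crest factor sqrt(2) ~ 1.41
square = np.sign(sine)              # square wave: crest factor ~ 1.0
impulsive = sine.copy()
impulsive[::1000] += 10.0           # periodic sharp impacts on top of the tone

for name, x in (("sine", sine), ("square", square), ("impulsive", impulsive)):
    print(f"{name:10s} crest factor: {crest_factor(x):.2f}")
```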

Now the latter statement, “signals with extremely high peak values transform into noise-like spectra on FFT diagrams”, can also be compared to the Dirac delta pulse in the frequency domain; more on that in a future post.

Psychoacoustics and the auditory masking phenomenon

Happy New Year dear fellow geeks!

Many new year celebrations around the world involve, at some point, listening to the national anthem or some other background music. During new year’s eve I had an intriguing discussion with friends about music and data compression. I have decided to take the initiative and turn a part of our discussion into this post.

We are constantly surrounded by sounds, and it is our brain’s task to distinguish between the various tones, keep what is important to us and filter out what we (it) decide is useless. The first part of this post aims to give a brief overview of the psychoacoustic phenomenon named “auditory masking”, while the second elaborates on the perceptual coding schemes used for lossy music data compression. In the next few lines you will not see any formulas or complicated modelling, since I want to focus on the principle and draw some conclusions.

We all know that the human ear can normally hear sounds with frequencies from 20 Hz up to around 20 kHz, the latter limit varying with age. To illustrate a bit, here is what a “Bode” plot of the human ear looks like:

An equal loudness human ear sensitivity plot.

The perceived frequency resolution varies within the audible range; its finest region appears to be within 1-4 kHz, which also matches the pitch range of most human speech. This property, together with the “masking” phenomenon, helps us understand what the human ear cannot perceive and can therefore be excluded from the music information we store.

So what is auditory masking? Imagine being at a bus stop, talking to a friend, when suddenly a noisy bus arrives. You are no longer able to hear your friend, but you can clearly see his lips moving and he is in fact still producing audible speech. His speech has been masked by the noise of the bus engine. Here is an example, narrowed down to a few tones:

Auditory masking phenomenon example.

Beyond a certain amplitude (marked as the mask threshold) we can no longer perceive the weaker tone of interest. This is known as the auditory masking phenomenon:

Auditory masking phenomenon, a more general case.

It is observed that the mask threshold varies with the frequencies of the masker tones and of the tone of interest: the further away the masker tones are from the main tone, the higher the mask threshold is, i.e. the louder they must be to mask it.
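
If you want to hear the effect for yourself, here is a small numpy sketch that writes a two-second WAV file: a loud 1 kHz masker plays throughout and a much quieter 1.1 kHz probe joins in halfway. The levels are my own guesses and thresholds vary between listeners, but with the masker active most people struggle to notice where the probe starts; silence the masker line and the probe becomes obvious:

```python
import numpy as np
import wave

fs = 44100
t = np.arange(int(fs * 2.0)) / fs            # two seconds
masker = 0.5 * np.sin(2 * np.pi * 1000 * t)  # loud 1 kHz masker tone
probe = 0.01 * np.sin(2 * np.pi * 1100 * t)  # quiet 1.1 kHz probe tone
probe[: len(probe) // 2] = 0.0               # probe enters at t = 1 s

pcm = np.int16(np.clip(masker + probe, -1.0, 1.0) * 32767)
with wave.open("masking_demo.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(fs)
    f.writeframes(pcm.tobytes())
```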

Knowing this phenomenon associated with the human ear, back in the early 1990s the Fraunhofer Institute for Integrated Circuits in Erlangen came up with a clever idea known as perceptual coding. Perceptual coding is a lossy audio encoding scheme which reduces the quantization precision for sounds in the music/sound data that are inaudible. The nowadays popular mp3 encoding scheme utilizes perceptual coding and the masking phenomenon for lossy audio compression.

Because I like drawing, here is a simple block diagram of the perceptual coding scheme:

A principle diagram of the perceptual coding scheme.

The core of the encoder is the perceptual human ear model; based on it, a number of bandpass filter banks are tuned (depending on the ear’s critical bandwidth at various frequencies). We need these to be able, at a later step, to reduce the detail of specific bands. The frequency tone spacing used (the filters) determines the masking threshold.
The perceptual model steers a variable quantization and sampling rate control block. In practice, the quantization steps of an inaudible tone falling into the corresponding filter bank (band-pass filter) are greatly reduced. Digital arithmetic number rounding or truncation is normally used for the information reduction, though a variable sampling rate control can sometimes also be utilized. After the information reduction, some additional noiseless encoding (e.g. have a look at Huffman coding) is performed before the final bit-stream packing.
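
To make the idea tangible, here is a deliberately crude Python sketch of mine. It is nothing like the real mp3 pipeline (no critical-band filter bank, no spreading functions, no Huffman stage); it merely quantizes spectral lines sitting far below the strongest line of each frame much more coarsely, which is the essence of perceptual bit allocation:

```python
import numpy as np

def toy_perceptual_quantizer(x, frame=1024, floor_db=-40.0):
    """Per frame: find the strongest spectral line and quantize every line
    sitting more than floor_db below it with a much coarser step."""
    out = x.copy()
    for start in range(0, len(x) - frame + 1, frame):
        X = np.fft.rfft(x[start:start + frame])
        mag_db = 20 * np.log10(np.abs(X) + 1e-12)
        masked = mag_db < mag_db.max() + floor_db     # crude "mask threshold"
        scale = np.abs(X).max() + 1e-12
        steps = np.where(masked, 1e-1, 1e-4) * scale  # coarse vs fine steps
        Xq = np.round(X / steps) * steps              # rounds real and imag parts
        out[start:start + frame] = np.fft.irfft(Xq, frame)
    return out

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t) + 0.001 * np.sin(2 * np.pi * 2000 * t)
y = toy_perceptual_quantizer(x)
# The quiet 2 kHz component falls below the crude threshold and is quantized
# away, while the dominant 440 Hz tone survives almost untouched.
print("max sample error:", np.max(np.abs(y - x)))
```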

Ideally, if the perceptual human ear model had infinite precision, such coding schemes would appear perceptually lossless. Unfortunately this is not the case in reality, but hey, mp3 has nevertheless changed the world. Isn’t this an incredibly elegant invention of the 20th century? Here is my example of an application of this clever lossy compression scheme 🙂