Archive for the ‘Science’ Category

Lorikeet visit

Sunday, 9 January, 2011

Rainbow Lorikeet · Lorikeet feeding time
Look who visited my balcony this morning. There were three rainbow lorikeets hanging around. I tried offering them sunflower seeds, but they didn’t like them much. Only when I checked later did I realise they are nectar and fruit eaters. Oops.

I might look into getting a nectar feeder to hang out there.


Sunday, 2 January, 2011

I was wondering when something like this would happen. A microbiologist at the University of British Columbia has started ScienceLeaks – a place to collect links to peer-reviewed science papers that have been “liberated” from behind journal pay-walls.

I’ve long thought that the scientific literature should be free to everyone to access. Science needs to move to a new publishing model in which this is possible.

I’m a little concerned at what something like this might do to peer review in the short term – mostly because I’m not prescient and can’t foresee all the factors involved and how they will play out. Journals currently charge for access to papers because they need that money to support the infrastructure to arrange the traditional anonymous peer review system for every paper that gets submitted. Take that revenue away, and something else needs to happen.

It may be possible for science to survive by people posting papers to free sites and having anyone (or any accredited user) post reviews, voting them up and down. But this could easily lead to favouritism or downright chaos, neither of which is desirable for science publication.

However, I think science needs to move to a free availability model, and soon. The number of scientifically literate and interested people who want to see what our researchers are doing is growing, and hiding the best science behind pay-walls makes it look like there’s something to hide, breeding conspiracy theories and anti-science. My main criticism of this ScienceLeaks site is that it looks too small and doesn’t go far enough. I think it won’t be long before we see a science leaking site on a massive scale, aiming to publish every science paper free of charge. The revolution is coming. I hope the journals are thinking about this and have a plan for it, otherwise they’re going to be caught with their pants down and science could suffer an upheaval before things settle down into a new paradigm.

Glass flows

Wednesday, 22 December, 2010

We’ve all heard the story that glass is a supercooled liquid that flows slowly. The “evidence” often cited is that centuries-old cathedral window panes are thicker at the bottom, so obviously the glass must have flowed downwards under gravity. But the real explanation is that panes of glass weren’t actually made uniformly thick in those bygone days, and they were almost always mounted thick side down. Wikipedia says so, and there are plenty of other pages on physics blogs and sites giving the same debunking of this pervasive urban myth.

It’s a myth, right? Right?

Maurizio Vannoni, Andrea Sordini, and Giuseppe Molesini of the CNR-Istituto Nazionale di Ottica in Florence have published an article in Optics Express: Long-term deformation at room temperature observed in fused silica. You need to pay to see the full article, but in summary, they have interferometrically measured the flatness of fused silica optical flats over a period of 10 years, as a routine part of their optical calibration work. (Fused silica is essentially an ultra-pure optical glass.)

The flats are circular, and stored in a clean room under controlled temperature (19-21°C) and humidity (40-50%). They are stored horizontally, supported by circular mounts. And, over 10 years, they have sagged measurably in the middle. The sagging is of the order of a nanometre, is greatest in the centre, and is least at the three points of the mount where the silica is clamped with elastomer pads, which supports the hypothesis that the sagging has been caused by gravity.

Now, a nanometre is not much. You’d have to let the silica sag for roughly 10 million years before the deformation was of the order of a millimetre and therefore easily detectable with an unaided eye. It’s certainly not enough to account for the medieval windows, by a factor of hundreds of thousands or more. And the researchers are slightly hesitant to declare that this is a case of the silica “flowing” under gravity – they propose other possible explanations, but admit the gravity flow one seems most likely. They conclude that the calculated viscosity of the silica – assuming it has flowed under gravity – is some 23 orders of magnitude lower (i.e. more flowy) than previously quoted and assumed by the optical community.
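That 10-million-year figure is easy to sanity-check, assuming (simplistically) that the sag rate stays constant:

```python
# Sanity check: extrapolate ~1 nm of sag per decade to a visible deformation,
# assuming (simplistically) a constant sag rate.
sag_per_decade_m = 1e-9              # about a nanometre over 10 years
rate_m_per_year = sag_per_decade_m / 10

target_m = 1e-3                      # a millimetre: detectable by eye
years_needed = target_m / rate_m_per_year
print(f"{years_needed:.0e} years")   # → 1e+07 years, i.e. ~10 million
```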

This is not even enough of an effect to materially affect, say, the working life of a space telescope mirror or something like that. For all non-ultra-ultra-ultra-high-precision-optical purposes, it’s still safe to say glass doesn’t flow. But, as is often the case in the real world, things are never perfect!

Subway Maths

Tuesday, 2 November, 2010

I just had a Subway sandwich, and I notice on the packaging the following information:

Subway sandwiches with 6 grams of fat or less include:

  • Ham (4.1g fat Aus; 4.5g fat NZ)
  • Veggie Delight (3.1g fat Aus; 3.1g fat NZ)
  • Turkey & Ham (4.9g fat Aus; 4.9g fat NZ)
  • Turkey (5.1g fat Aus; 4.9g fat NZ)
  • (some others)

Prepared according to standard recipes […]

This presents some interesting observations. Firstly, notice that the amounts of fat are different for Australia and New Zealand. I’m guessing this is because here in Australia Subway uses leaner ham than in NZ, while conversely we get fattier turkey than in NZ.

Secondly, the veggie sub, which has negligible fat content in the filling (essentially just lettuce, tomato, onion, carrot, and cucumber – the “6 grams of fat or less” subs are defined to have “no cheese or condiments”) must be getting all of its fat content from the bread. The fact that this is the same in both countries is mildly reassuring.

Thirdly, the combined Australian ham & turkey sub contains a fat content that is, understandably, between the fat contents of the ham and the turkey subs alone – assuming it contains a fraction of the ham plus a fraction of the turkey of the individual subs, not just both lots added together. The NZ ham & turkey is a bit more puzzling, since it contains the same amount of fat as a turkey sub alone! Yet we can see the turkey is fattier than the ham, so if we remove some turkey and add an equal amount of ham, the total fat content should drop.

Fourthly, the “prepared according to standard recipes” implies that the same proportions of ingredients are used in both countries to make the same sandwiches. (This may not actually be the case, but run with me here.)

Well, given these data, let’s see what we can deduce. Let’s call the amount of ham in a ham sub a “serve of ham”, and the amount of turkey in a turkey sub a “serve of turkey”. Then in Australia a serve of ham contains 1 gram of fat and a serve of turkey contains 2 grams of fat, while in NZ they contain 1.4 and 1.8 grams of fat respectively.

Now let h be the number of serves of ham in a ham & turkey sub and t be the number of serves of turkey. Then, subtracting the 3.1 grams of fat from the bread (the same for each sandwich), we have:

  • In Australia: 1h + 2t = 1.8
  • In New Zealand: 1.4h + 1.8t = 1.8

Plugging this into Mathematica (hey, I could solve it by hand if I wanted, but I have Mathematica sitting right here, so why not use it?) gives:

  • h = 0.36
  • t = 0.72
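For anyone without Mathematica sitting right there, the little two-equation system can be checked with plain Python (a quick sketch using Cramer’s rule):

```python
# Solve the two sandwich equations (bread fat of 3.1 g already subtracted):
#   Australia:    1.0*h + 2.0*t = 1.8
#   New Zealand:  1.4*h + 1.8*t = 1.8
a1, b1, c1 = 1.0, 2.0, 1.8
a2, b2, c2 = 1.4, 1.8, 1.8

det = a1 * b2 - a2 * b1          # 2x2 determinant, Cramer's rule
h = (c1 * b2 - c2 * b1) / det    # serves of ham
t = (a1 * c2 - a2 * c1) / det    # serves of turkey
print(round(h, 2), round(t, 2))  # → 0.36 0.72
```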

So, this means:

  • A ham & turkey sub contains exactly twice as much turkey as ham!
  • The total amount of meat in a ham and turkey sub is 1.08 serves!

But wait, there’s more! That 1.08 serves is made up of 0.36 serves of ham and 0.72 serves of turkey. If the serves are the same size, then you get more meat in a ham & turkey sub than in either a ham or a turkey sub alone. But let’s say the size of a serve of ham is H and the size of a serve of turkey is T. Then setting the size of a standard serve of “ham & turkey” to be the same size as a serve of turkey, we get:

  • 0.36H + 0.72T = T

Solving for T in terms of H, this gives:

  • T = 9H/7

In other words, if a serve of turkey is more than 9/7 the size of a serve of ham, then a ham & turkey sub gives you less meat than a turkey sub. But if a serve of turkey is less than 9/7 the size of a serve of ham, then a ham & turkey sub gives you more meat than either a ham sub or a turkey sub alone. A sensible assumption is that Subway is likely not going to give you less meat on a ham & turkey sub than on a turkey sub, so we can be pretty sure that a turkey sub contains no more than 9/7 the amount of meat of a ham sub.

This then allows us to calculate that, in Australia, turkey is at least (2/1)/(9/7) = 1.56 times as fatty as ham, while in New Zealand turkey is at least (1.8/1.4)/(9/7) = 1 times as fatty as ham… that is, at the very least exactly as fatty as ham!
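The serve-size algebra and the fattiness bounds can be double-checked with exact fractions (a sketch of the arithmetic above, nothing more):

```python
from fractions import Fraction

# Serve sizes: H for a serve of ham, T for a serve of turkey.
# A ham & turkey sub holds 0.36 serves of ham and 0.72 serves of turkey.
ham_serves = Fraction(36, 100)
turkey_serves = Fraction(72, 100)

# Equal meat to a turkey sub: 0.36*H + 0.72*T = T, so T/H = 0.36 / 0.28
ratio_T_over_H = ham_serves / (1 - turkey_serves)
print(ratio_T_over_H)                          # → 9/7

# Lower bounds on how much fattier turkey is than ham, per unit of meat:
aus = Fraction(2, 1) / ratio_T_over_H          # Australia: 2 g vs 1 g of fat per serve
nz = Fraction(18, 10) / Fraction(14, 10) / ratio_T_over_H  # NZ: 1.8 g vs 1.4 g
print(aus)                                     # → 14/9, about 1.56
print(nz)                                      # → 1
```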

What this in turn says about turkey and pig farming practices in Australia and New Zealand is left as an exercise for the reader.


Thursday, 10 June, 2010

Someone at work was talking about the 2012 total solar eclipse, which will be visible from Cairns in northern Queensland. Of course eclipse enthusiasts are already planning their trips, and Cairns has a tourism site devoted to it. I said it would be cool to go see it, but organising a trip is such a hassle, especially when the place will be crammed with thousands of people going there at exactly the same time for the same reason. (In fact, we’ll see a roughly 65% partial eclipse from here in Sydney, so that’ll be moderately cool.)

Then someone said we could just wait until 2028.

The total solar eclipse of 22 July, 2028 will be visible from Sydney. Not just as a partial eclipse either – the path of totality crosses the city. And it doesn’t just scrape across an edge of the city – the path of totality completely covers the entire city. Check NASA’s page, which has a Google map with the path of totality marked on it, and zoom in to Sydney. Yeah, keep zooming in.

Everywhere from Wyong to Wollongong will see totality. The centre of the path of totality passes less than 5km from my house! It passes within a few hundred metres of the astronomy department of the University of Sydney, where I studied astronomy! It passes right over one of my favourite restaurants!

I’m sure I’ve heard of this eclipse before, but previously it was always “too far in the future” to really think about, and I’d never had such a detailed look at the eclipse track before. I’m stunned. Only 18 years before the world will fall all over itself to come to Sydney and the entire city will experience over 4 minutes of eclipse totality. Suddenly it doesn’t seem all that far away.

I can’t wait.

Trusting in Science

Thursday, 8 April, 2010

Why do so many people distrust science, scientists, and informed scientific consensus so much?

Maybe it’s old news, but I had an insight into this when looking at some stuff about Riedel wine glasses. (This is not a wine post, really.) These glasses are marketed as “scientifically” designed to maximise the experience and enjoyment of drinking a glass of wine. What’s more, they have dozens of different glass shapes, each “designed” to work best for some particular type of wine. The upshot is – if you believe this – that to enjoy your wine to the maximum you need to buy about 8 different sets of Riedel glassware.

Many people can spot the conflict of interest here. Obviously it’s to Riedel’s advantage if it’s true that to best enjoy your cabernet sauvignon you need a different glass to the one you drink merlot from. So if they say it’s true, then even the mildly cynical can easily come to the conclusion that they’re just making it up.

And what about those wrinkle creams? You know the ones, that are advertised as “scientifically proven to reduce wrinkles by 78%”. How do you even measure that wrinkles have been reduced by 78%? Does anyone really believe that?

The culprit here is advertising. Advertisers like to use “science” to promote their products, because it has a veneer of authenticity that gets some people to trust their products. But most of us have become habituated to “scientific” claims by advertisers and just mentally filter them out or assign a low weight to them and evaluate the products on our own criteria. Science has become something that you can choose to believe if you want – and maybe you’re gullible if you believe it.

Unfortunately, that’s a misguided representation of science. When hundreds or thousands of experienced scientists agree that something is most probably true because of all the research, data collection, analysis, and peer review that they’ve put in, it’s not the same as a claim on a commercial. It actually has serious weight behind it, and you better take on board the idea that what they’re saying is more likely to be true than not. Yes, there are counterexamples, but they are few in a vast edifice of consistent, established scientific knowledge. The odds of any given piece of scientific consensus turning out to be incorrect are very small indeed.

The problem is, large swathes of laypeople who don’t fully understand how science operates simply look on it as another marketing move. They feel free to be cynical, and to completely disregard what the scientific consensus says. Especially if they don’t like what the message is, or it makes them uncomfortable in some way.

Science is about uncovering the truth, not about concocting stories designed to sell a product. Stories can be made palatable. The truth is different; it doesn’t always fit the way we want the world to work. Disbelieving it won’t make you immune from it. Science has checks and balances to make sure that mistakes or lies don’t get propagated. That’s why it’s such a huge scandal whenever a scientist is found to have falsified data or lied about a research result. This is the absolute capital sin of science, and when it is discovered it is treated accordingly. Careers in science can be ruined by one instance. You can be pretty sure that the vast majority of scientists out there are keeping their noses clean, and when they say they have research to support some conclusion, that they really do have solid data behind it.

Advertising is a completely different beast. Judging science by the standards you use to judge advertising is simplistic and misguided. But it’s a trap that more and more people seem to be falling into, alas.

Digital post

Thursday, 25 March, 2010

The Oncoming Storm
What goes into making a photo? The simplistic answer is that you just record the light that enters the camera.

The problem with that is that a camera responds to light in a very different way to how the human eye and the human brain respond to light. If you pump the same number of photons of a certain wavelength into a camera at a certain position, they will hit the same pixel on the sensor (or the same position on a piece of film, if you’re old school) and be recorded in the same way, give or take some random noise which is mostly insignificant. But if you pump the same number of photons of a certain wavelength into an eyeball, the human brain will process that signal very differently, depending on what else is around it in the image, how well adapted the eye is to the present illumination level, the presence of strong light sources or colours elsewhere in the visual field, and so on.

This causes the common phenomenon of seeing something spectacular – a sunset is a good and common example – and taking dozens of photos of it because, well, it’s just so amazing! But then when you look at the photos later, they’re all kind of blah. They have a dark, almost black foreground, and a washed out sky, and the colours aren’t nearly as vivid as you remember seeing. The camera isn’t lying – it just records what the light was actually doing. It’s your brain that was lying at the time your eyes were seeing the sunset. The human visual system is wonderfully adaptive. It can make out details in extremely high contrast scenes that current cameras struggle, or fail, to deal with. That’s the first problem.

The second problem is that the physical objects we use to reproduce photos – prints or display screens – don’t have anywhere near the brightness contrast or the range of colours that humans can actually perceive. There are colours that you can see in real life that cannot be generated by a consumer-level display screen. The result of these two problems is that photos straight off a camera sensor often bear only a superficial relation to the contrasts and colours we saw when we took the photo.

This problem is addressed by post-processing. This is not a new thing associated with digital photos. The old masters of film photography knew this, and used darkroom techniques to produce prints of images that were based on what was recorded on the negative, but were modified to give a better representation of how their eye remembered the image in the field. Dodging and burning (which some of you may be familiar with in digital image processing applications) began as darkroom techniques to alter contrast levels locally in a photograph. Ansel Adams, who created some of the most memorable black and white film images in the history of photography, used these techniques extensively. His photos were so striking and memorable and lifelike because he manipulated the data on the negatives to produce a print that the eye would recognise as close to what it would see in reality, rather than within the limited range of photographic film.

And the same principle applies to digital photos. The JPEGs you get out of digital cameras are processed to adjust the contrast levels and colour saturation so that when you display the image it looks roughly how it looked in reality. This is done automatically for the most part, with most people blissfully unaware. It’s only if you examine the raw image data off the sensor that you notice how different it is to what the scene should look like. And if you are an advanced level digital photographer and manipulate raw images and process them yourself to produce nice-looking results, you know this, and that some judicious tweaking can produce much more pleasing photos.

The point of this is that digital post-processing is often seen as “cheating” somehow, making the photo into something it never was. It can be that, certainly. But frequently some post-processing is needed simply to make a photo as recorded more closely match what we saw with our eyes when we decided to take the shot.

Oncoming Storm: original
When I saw this photo (right) in my collection after a trip to the beach, my first thought was, “Bleah, how dull. Why did I even take that shot?” But I loaded it up into Photoshop and played around a bit. I’m not claiming the shot at the top of this post is a perfect representation of what I saw with my eyes (being displayed on a screen, it never can be), but it’s definitely a closer match to what my brain told me I was looking at when I decided to take the photo.

I’m sure some people will claim they prefer the “unprocessed” version, saying the processed one looks “too fake”. Fine. I think it better represents what I saw that day. It is perhaps a little more enhanced for dramatic effect, but that’s also part of what makes photography an art form, rather than just a mechanical process. I can make a decision on how to present the photo, knowing that no way that I can present it actually matches the experience of being there.

The point here is that digital post-processing of photos shouldn’t be looked down upon as “messing with reality”. The image as recorded in the camera is already “messed with”. What you can do is take that data and turn it into something you want to look at and that reminds you of what you saw – in your mind – when you took it. And isn’t that what photography is about?

Ripe, Fruity, with a Hint of Carbon-14

Tuesday, 23 March, 2010

A story combining wine and nuclear physics… How could I not mention it here?

A group from the University of Adelaide have examined the carbon-14 content of Barossa Valley wines of authenticated vintages ranging from 1958 to 1997. They find a significant correlation between the vintage and the carbon-14 count, strong enough to allow them to date an unknown vintage correctly in a blind test to within a year.

The C-14 levels vary over the timespan tested because of excess atmospheric carbon-14 produced by open-air atomic testing in the post-WWII years. That atmospheric C-14 gets absorbed by the grapes and ends up in the wine. This “bomb pulse” dating technique has been known for some time, but it’s the first time it’s been applied to dating wine vintages.
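The idea behind bomb-pulse dating can be sketched in a few lines of Python: match the measured C-14 level against the known atmospheric curve and interpolate. Note the curve values below are invented placeholders for illustration, not real atmospheric measurements:

```python
# Bomb-pulse dating sketch: interpolate a vintage from a measured C-14 level
# on the declining side of the atmospheric curve. These (year, level) pairs
# are invented placeholders for illustration, NOT real atmospheric data.
curve = [(1965, 1.90), (1975, 1.35), (1985, 1.21), (1995, 1.11), (2005, 1.06)]

def date_sample(level):
    # Walk consecutive points and linearly interpolate within the bracket.
    for (y0, l0), (y1, l1) in zip(curve, curve[1:]):
        if l1 <= level <= l0:
            frac = (l0 - level) / (l0 - l1)
            return y0 + frac * (y1 - y0)
    raise ValueError("C-14 level outside the calibrated range")

print(round(date_sample(1.28)))  # → 1980
```

(The real technique needs a measured atmospheric curve and must also handle the rising side of the pulse, where one level corresponds to two possible years.)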

Lest this be considered a trivial application of science, remember that top end vintage wines are big business. There is concern over forgery or adulteration of expensive wines, and this technique can be used on very small samples to either verify a wine’s vintage or detect tampering. Science to the rescue!

A reference to the original research publication can be found here. Apparently it was published in 2004, so I don’t know why the SMH decided to pick this up and run it as a story today.


Tuesday, 16 March, 2010

In many fields, there is a certain level of abstraction you need to achieve with your knowledge in order to apply it at a more advanced level.

As an example, consider computer programming. The most basic level is manipulating variables and control flow with branches and loops. A level of abstraction comes when you write callable functions to encapsulate repeated tasks. There’s another level when you learn about pointers and references. And then you can fling around references to functions and pass those in as parameters to other functions. Further abstraction comes with function templating, design patterns, and so on. And at some point you get to wrangling large chunks of code that are standardised enough that you can write other code to generate those chunks of code from some sort of code definition files.

When you’re writing computer code whose purpose is not to calculate some value, but to generate other computer code, then you’ve climbed a fair way up the abstraction pyramid.

I used to have a job cutting code. I’m a competent programmer. But when the people around me started writing code to parse XML files into other, more complex code, I began feeling out of my depth. It was a level of abstraction too far for me to comfortably work with. I understand the concept of code generation, and can see the benefits, but actually doing it requires mental gymnastics that don’t quite come easily enough for me.

I’ve found I have similar trouble with more advanced mathematics. I’m fine with stuff that I can link directly to practical applications, like vectors and calculus to give simple examples. But these days I get thrown all sorts of matrix algebra and graph theory and classification trees and stuff which seems one layer too far removed from reality for me to fully comprehend. At some point along the way, I reached my abstraction threshold, and everything beyond that just seems like symbol pushing, with no underlying meaning.

I’m beginning to think that my strengths in the mathematical sciences lie not in the greater realms of abstraction, but in the solid application of what I know to the real world. I’m a visual person. I understand Fourier transforms and quantum mechanics and differential equations in an intuitive way, by thinking of them in terms of how they are represented by physical systems, and feeding that back in to figure out how the mathematics must behave. I don’t work by manipulating the mathematical equations and then mapping them onto the physical system.

Sometimes this seems like a limitation. Other people clearly have higher abstraction limits than I do, and are comfortable applying matrix operators in a purely mathematical way when they’re three or four steps removed from representing something, while I ask questions in their presentations about what it actually means. But maybe the lower abstraction threshold allows me to make deeper connections to describing physical systems, since many people have commented on my ability to describe complex scientific principles in terms that make them readily comprehensible. To me that just seems natural, as that’s the way I understand them. I need to see all those deep connections before I feel I really understand something.

Maybe that’s why I feel uncomfortable with higher abstraction. The connections are more tenuous, or fewer, and I feel like I’m working without a safety net, a reality anchor. In my heart I feel that the strong connections must be there, but they feel elusive, ghostly – and so I don’t feel that I fully understand what’s going on.

I don’t have a snappy conclusion to this line of thought. I only really thought about it last week, and I’m still digesting it and trying to see if it helps me. I think it might be the reason I have a breadth-first approach to knowledge. Any one field becomes more abstract as you learn more about it. If my abstraction threshold is lower than average (for science/research-minded people), it could explain why I diverge into looking at something different before I have an “expert” knowledge of any one subject.


Thursday, 11 March, 2010

Someone asked me today why the aperture number on a camera lens gets bigger as the aperture size gets smaller. Some of you no doubt already know why. But for anyone who’s ever wondered the same thing (as I did for many years when I first started using an SLR camera back in the days of film), let me explain.

The aperture is the number you see written as f/2.8 or f/8 or f/22. It describes the size of the opening inside the lens through which the light passes. There’s an iris diaphragm which can open and close to let in different amounts of light. This is useful to control for two reasons:

  1. The wider the aperture, the more light you let in to expose your film or digital camera sensor. So in dim light, it’s often better to use a wider aperture. Conversely, in bright sunlight, you can use a narrower aperture to get the same exposure at the same shutter speed.
  2. The wider your aperture the narrower your depth of field. This is a measure of the range of distances from your camera within which objects will be in focus. If your depth of field is large, lots of stuff will tend to be in focus, while if it’s narrow, only objects a precise distance from the camera will be in focus and everything else will be blurry. This might sound bad, but in many cases you want a shallow depth of field, such as to make a flattering portrait of someone – it looks better if things in the background are blurry so as not to distract your eye from the subject of the photo. So a portrait photographer will tend to use a wide aperture. On the other hand, a large depth of field is good for landscape photography, where you want everything in focus, so a landscape photographer would tend to use a narrow aperture.

The interesting thing is that to a beginner in photography the numbers of the apertures might seem to be backwards. f/2.8 is a wide aperture, letting in a lot of light, while f/22 is a narrow one, letting in relatively little light. Why is this?

The answer lies in the mysterious “f/” that precedes the aperture number. Although people usually refer to the apertures as “eff two point eight” or “eff twenty-two”, the slash symbol is actually a division sign. The f is the symbol for the focal length of the lens. If you have a standard 50mm lens, the aperture f/2.8 is 50/2.8 = 17.9mm wide. And the aperture f/22 is 50/22 = 2.3mm wide. So you see f/2.8 is quite a bit wider than f/22.

The interesting thing is that the apertures are defined in terms of the focal length of the lens. If you have a 200mm telephoto lens, then f/2.8 is 200/2.8 = 71mm wide and f/22 is 200/22 = 9.1mm wide. So in a physical sense the “same” aperture numbers are actually physically bigger on a longer lens, and physically smaller on a shorter lens.

So you might expect f/2.8 on a 200mm lens to let in more light than f/2.8 on a 50mm lens. But this isn’t the case. The 200mm lens has a field of view 4 times smaller than the 50mm lens – in other words it magnifies things by 4 times compared to the 50mm lens. After all, this is why you use a telephoto lens, to make things look bigger and closer! But the field of view being 4 times smaller means that the lens is gathering 16 times less light (it’s 4 times smaller in the horizontal direction and 4 times smaller in the vertical direction, so it sees an area 16 times smaller). But then the aperture f/2.8 on the 200mm lens is 4 times bigger than the aperture f/2.8 on the 50mm lens, so it gathers 16 times as much light (again, 4 times bigger horizontally, multiplied by 4 times bigger vertically = 16 times the area). So the physically larger aperture exactly cancels the fact that the lens is only seeing a smaller area of the image. The result is that aperture f/2.8 on a given lens gathers exactly the same amount of light as aperture f/2.8 on any other lens! (Assuming an evenly lit subject.)
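The cancellation argument can be verified numerically (a quick sketch of the arithmetic above):

```python
# Why f/2.8 gathers the same light on any lens: the bigger physical aperture
# of a long lens exactly offsets the smaller patch of scene it images.

def aperture_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

d50 = aperture_diameter_mm(50, 2.8)     # ~17.9 mm
d200 = aperture_diameter_mm(200, 2.8)   # ~71.4 mm

area_ratio = (d200 / d50) ** 2   # 16x the light-gathering area
fov_ratio = (200 / 50) ** 2      # 16x less scene area imaged
print(round(area_ratio / fov_ratio, 6))  # → 1.0: the effects cancel exactly
```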

So that’s why lens apertures are specified in this way. Rather than say the aperture is 5mm, or 13mm, or whatever, it’s much more convenient for figuring your exposure to express the aperture as a fraction of the focal length of the lens. Which explains the odd-looking “f/” notation, and why the numbers get bigger as the aperture gets smaller.