The Technology Behind the 2020 Nobel Prize in Physics
Last month, the Nobel Prize in Physics for 2020 went to Roger Penrose for establishing that black hole formation is “a robust prediction of the general theory of relativity”, while Andrea Ghez and Reinhard Genzel shared it with him for their “discovery of a supermassive compact object at the centre of our galaxy.” While the theoretical issues behind the Nobel Prizes are discussed in the press, very little attention is given to the experimental and technical developments that made these discoveries possible.
In the late 18th century, John Michell and Pierre-Simon Laplace had already conjectured that since gravity is always an attractive force, very massive objects can collapse under their own weight. If an object is massive enough, its entire mass can squeeze into a very compact body whose surface gravity is so large that neither particles nor light can escape from its surface, which is what we nowadays call a black hole. But twentieth-century physics showed that this Newtonian picture would break down in regions of strong gravitational fields, where the physics must instead be described by the General Theory of Relativity, or GTR, that is, in terms of the distortion of space-time curvature by strong gravitational fields.
Very soon after Einstein’s pioneering paper was published, Karl Schwarzschild showed, in 1916, that a black hole-like situation would arise if an object of mass M were squeezed within a distance r0 = 2GM/c² of its centre. (This quantity is called the Schwarzschild radius.) The Schwarzschild radii of the sun and the earth work out to 3.0 km and 9 mm respectively, far smaller than their actual sizes. Could the radii of massive objects shrink to their Schwarzschild radii? Can such objects really exist? Or would something else arrest their collapse into the black hole state?
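The two figures quoted above are easy to check. A short Python sketch, using standard values of the physical constants (the rounding here is illustrative, not authoritative):

```python
# Schwarzschild radius r0 = 2GM/c^2 for the Sun and the Earth.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Radius within which a mass must be squeezed to form a black hole."""
    return 2 * G * mass_kg / c**2

M_sun = 1.989e30    # kg
M_earth = 5.972e24  # kg

print(f"Sun:   {schwarzschild_radius(M_sun) / 1e3:.1f} km")   # ~3.0 km
print(f"Earth: {schwarzschild_radius(M_earth) * 1e3:.0f} mm") # ~9 mm
```

Running this recovers the 3.0 km and 9 mm quoted in the text.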
After scepticism had reigned for decades, the question was reopened in 1955 in a pioneering paper by Amal Kumar Raychaudhuri (and later Arthur Komar) that gave the conditions under which such collapse can be reached in finite time. In 1965, Roger Penrose brought the ideas of global geometry in GTR to bear on the problem. Penrose’s radical approach showed that collapse to a black hole is inevitable for sufficiently massive objects, and that it cannot be arrested by other physical processes, such as rotation. It was rigorously shown that such collapse was not merely consistent with GTR but a direct consequence of it.
The question is, why did it take 50 years after Penrose’s theoretical work to discover a real black hole? It is here that the role of technological advances—incremental in most cases—comes in. Looking back, there was a surge in radio astronomy during the 1950s. Though the technique was discovered by Karl Jansky in the early 1930s, it came of age after World War II. The development of radar and radio science during the war brought this expertise into radio astronomy, and several radio telescopes were built at different sites. A flood of new data arrived too, and this required that some theoretical issues of GTR be revisited.
One such development was the discovery of quasars; Penrose’s work came close on its heels. Quasars are extremely luminous radio sources located far beyond our galaxy, the nearest being about 2.5 billion light-years away. It was subsequently found that quasars emit across the whole electromagnetic spectrum and appear in different parts of the sky, but always at the centres of galaxies. To explain the enormous energy output of quasars, Donald Lynden-Bell proposed, in 1969, that they harbour supermassive black holes at their centres, containing several million solar masses of matter. Such a black hole pulls surrounding matter towards itself with its enormous gravitational field, and this in-fall of matter sets off several processes that emit radiation, which radio telescopes were able to detect. Lynden-Bell had said, “We would be wrong to conclude that such objects in space-time should be unobservable.”
Since quasars were being “hosted” at the centres of different galaxies, the question naturally arose: should we not also look for a supermassive black hole at the centre of our own Milky Way?
Professors Reinhard Genzel and Andrea Ghez pursued this problem for about thirty years. In 1988, the present author met Ghez at a workshop in Cargese. She was then a fresh Ph.D. scholar, and presented measurements of the orbital sizes of some binaries, i.e. stars that orbit around each other. The search for a supermassive black hole would involve the same idea: if there were a supermassive black hole at the galactic centre, then stars in its vicinity would orbit around it. If we succeed in measuring their orbital radii and periods, we would have a signature of the black hole.
They looked for such stars in the Sagittarius A* region, which lies at the heart of the Milky Way. In this region, the enormous gravity of the supermassive black hole could give stars orbital periods as short as 16 years, that is, well within a human lifetime, while the sun takes about 200 million years to go round the galactic centre! But the diameter of such an orbit would be only about 17 light-hours, and one has to observe it at the galactic centre, which is 26,000 light-years away! The technological challenge, as Genzel explained, is like trying to resolve, from the earth, details a few centimetres across on the surface of the moon.
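The “signature” works through Kepler’s third law: an orbital period and a semi-major axis together fix the central mass, M = 4π²a³/GT². A rough Python check, using illustrative values close to those published for the star S2 (a semi-major axis of about 970 astronomical units and the 16-year period mentioned above; the exact figures are assumptions here), yields a central mass of a few million solar masses:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30       # kg
AU = 1.496e11          # astronomical unit, m
YEAR = 3.156e7         # s

# Illustrative orbital elements, close to those published for the star S2.
a = 970 * AU           # semi-major axis (assumed value)
T = 16 * YEAR          # orbital period

# Kepler's third law solved for the central mass: M = 4*pi^2*a^3 / (G*T^2)
M = 4 * math.pi**2 * a**3 / (G * T**2)
print(f"Central mass ~ {M / M_sun:.1e} solar masses")  # a few million
```

This is only an order-of-magnitude sketch, but it shows why a single well-measured stellar orbit weighs the black hole.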
Scientists had to use telescopes of high light-gathering capacity, i.e. of large aperture diameter, since the starlight would be very feeble after travelling 26,000 light-years, passing through layer upon layer of absorbing material, such as dust, in the galaxy. Ghez’s team used the 10-metre diameter Keck Telescope in Hawaii, and Genzel’s group used the 8-metre aperture telescopes at the European Southern Observatory in Chile. Building such large telescopes is a technological challenge in itself. Their design and execution need mechanical, electrical, civil, material-science and electronics expertise. Concerning other technological challenges, Genzel said that over these thirty years they had to improve the resolution and sensitivity of their instruments by factors of several thousand to a few million.
One technological barrier was to get sensitive infrared detectors (the infrared counterparts of CCD imaging chips), as observations had to be made in the infrared. But many new ideas were implemented as well. Importantly, instead of using a single telescope, light from multiple telescopes (for example, the four 8-metre aperture telescopes at the ESO) was combined in a multi-telescope interferometer, improving the resolving power several-fold. Further, the scientists had to overcome the problem of “seeing”—the technical term for the disturbance that atmospheric turbulence causes in imaging. As starlight shimmers (as in the ‘twinkling’ of stars), a long exposure would wash out all the details they wanted to see. So very short exposures of 1 to 10 milliseconds were taken, well within the time over which the atmosphere does not change.
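The resolution gain from combining telescopes can be put in rough numbers. A single aperture of diameter D resolves angles of about 1.22λ/D, while an interferometer resolves about λ/B, where B is the separation (baseline) between the telescopes. The sketch below takes an observing wavelength of 2.2 micrometres (a typical near-infrared band) and a 130-metre baseline of the kind available at the ESO site; both values are illustrative assumptions:

```python
ARCSEC_PER_RAD = 206265          # arcseconds in one radian

wavelength = 2.2e-6              # m, typical near-infrared band (assumed)

def single_aperture(d):
    """Diffraction limit of one telescope of diameter d: ~1.22*lambda/d."""
    return 1.22 * wavelength / d * ARCSEC_PER_RAD

def interferometer(baseline):
    """Resolution of a two-telescope interferometer: ~lambda/baseline."""
    return wavelength / baseline * ARCSEC_PER_RAD

print(f"8 m telescope : {single_aperture(8.0) * 1000:.0f} milliarcsec")
print(f"130 m baseline: {interferometer(130.0) * 1000:.1f} milliarcsec")
```

With these assumed numbers the baseline wins by roughly a factor of twenty, which is why combining the telescopes mattered so much.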
To this, a new technique called “Adaptive Optics” was added. Its novelty also came with several challenges. Adaptive optics combines two steps: (1) wave front sensing and (2) wave front correction. In the first step, “atmospheric seeing” is monitored by creating an artificial star. A laser with a wavelength of 589 nanometres is beamed into the atmosphere from the telescope, where it is absorbed by sodium atoms in the mesosphere at a height of about 90 km. These sodium atoms then de-excite by re-emitting the light, producing a glowing star-like point in the mesosphere. Blurred images of this artificial star are recorded through the telescope, revealing the wave front distortions. This is “wave front sensing”, which monitors the “seeing” problem. For wave front correction, this “seeing” information is fed to a deformable, multi-segmented flexible mirror, which is part of a multi-element imaging system.
This mirror is driven by several piezoelectric actuators (essentially, tiny pistons) in such a way that the final image formed by the multi-element optics is stable and corrected for atmospheric “seeing” disturbances. The technological demands of fabricating the multi-segmented mirror, and of the control and rapid computation needed for wave front sensing and correction, cannot be overstated.
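The sense-and-correct cycle is, at heart, a closed control loop: the sensor measures the residual distortion (the atmosphere’s phase error minus the mirror’s current shape), and each actuator is then nudged by a fraction of that residual. A deliberately simplified simulation, with a frozen atmospheric phase screen and a plain integrator controller (the segment count and loop gain are invented for illustration):

```python
import math
import random

random.seed(1)

N_SEGMENTS = 32      # number of mirror segments (illustrative)
GAIN = 0.5           # integrator loop gain (illustrative)

# Frozen "atmospheric" phase error over each mirror segment, in radians.
atmosphere = [random.gauss(0.0, 1.0) for _ in range(N_SEGMENTS)]
correction = [0.0] * N_SEGMENTS   # shape currently commanded to the mirror

def rms(values):
    """Root-mean-square of a list of phase errors."""
    return math.sqrt(sum(v * v for v in values) / len(values))

print(f"before: rms residual = {rms(atmosphere):.3f} rad")

for _ in range(20):   # 20 sense-and-correct cycles
    # Wave front sensing: measure what the mirror has not yet removed.
    residual = [a - c for a, c in zip(atmosphere, correction)]
    # Wave front correction: push each actuator toward the residual.
    correction = [c + GAIN * r for c, r in zip(correction, residual)]

after = [a - c for a, c in zip(atmosphere, correction)]
print(f"after : rms residual = {rms(after):.6f} rad")
```

In the real system the atmosphere changes every few milliseconds, so this loop must run at hundreds of cycles per second; the toy version only shows why repeated sensing and partial correction drives the residual distortion toward zero.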
The signal is thus “cleaned” of atmospheric noise by the two-step adaptive optics. From these clean images, the orbital sizes of stars in the Sagittarius A* region could be measured precisely, proving the existence of a supermassive black hole at the galactic centre with a mass 4 million times that of the sun. This work, based on large team efforts, has opened a new window for us to look at the universe and removed the enigma of black holes, as well as the scepticism about their existence.
These achievements would motivate many an aspiring scientist and interest citizens in science as well. Thus, Andrea Ghez has said that this prize would make her “more passionate about the teaching side of the job…” and enhance “people's ability to question and their ability to think, which is crucial to the future of the world”. But these noble ideas need multi-pronged social investment, and society has to take the initiative for it.
The author is a retired scientist from the Indian Institute of Astrophysics, Bengaluru and current president of the All India People’s Science Network. He is a science communicator, keenly interested in science-technology-society interactions.