The brightness of a star can tell us many things about its age, size, and lifespan. But how are these measurements actually taken? Explore this photometry page to find out exactly how!
Photometry is the measurement of the brightness of astronomical objects. A standard product of photometry is the light curve, a plot of brightness versus time.

Magnitude is a logarithmic measure of an object's brightness. The magnitude system is defined so that dim objects have large positive magnitudes, while bright objects have small magnitudes; for extremely bright objects, magnitudes can even be negative. Some examples will be helpful: the brightest stars have magnitudes around 0 or 1, and the dimmest stars visible from a dark site far from city lights have magnitudes around 6. The full moon has a magnitude of about –13, and the Sun is at about –26. This system applies to visible, infrared (IR), and near-ultraviolet (UV) light. The equation used by modern astronomers to define the difference in magnitude between two sources is:
`m_1 – m_2 = 2.5log(B_2 / B_1)`
where m1 and m2 are the magnitudes of the two sources, and B1 and B2 are their brightnesses (energy fluxes – see below). The zero-point for the system is the star Vega, which is defined to have a magnitude of zero at all wavebands (colors).
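The magnitude equation above is easy to apply in code. Here is a minimal sketch in Python; the function name and the example flux values are illustrative, not from a particular astronomy library:

```python
import math

def magnitude_difference(b1, b2):
    """Return m1 - m2 for two sources with brightnesses
    (energy fluxes) b1 and b2, using m1 - m2 = 2.5 * log10(b2 / b1).
    A dimmer source gets the LARGER magnitude."""
    return 2.5 * math.log10(b2 / b1)

# A source 100x dimmer than another is exactly 5 magnitudes fainter:
print(magnitude_difference(1.0, 100.0))  # 5.0
```

Note that the logarithm makes the scale compress huge brightness ranges: each factor of 100 in flux corresponds to exactly 5 magnitudes, so the roughly 32-magnitude span from the Sun (–26) to the faintest naked-eye stars (6) covers many trillions in flux ratio.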
How to Determine the Brightness of an Object
Determining the brightness of an astronomical object using a CCD detector includes several steps. The basic ideas involved are illustrated in our Cookie Cutter Photometry exercise. In short, you must compare your star’s brightness to a reference star (or stars) of known brightness. It’s a bit like measuring the elevation of a point on Earth’s surface with reference to the standard “sea level.” Complete the Cookie Cutter activity to get a basic understanding of how this is done.
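The comparison step described above can be sketched in a few lines of Python. This is a simplified illustration, assuming you already have background-subtracted counts summed inside a "cookie cutter" aperture for both stars; the function and variable names are made up for this example:

```python
import math

def target_magnitude(target_counts, ref_counts, ref_magnitude):
    """Differential photometry sketch: given background-subtracted
    aperture counts for a target star and a reference star of known
    magnitude, apply m_t - m_r = 2.5 * log10(B_r / B_t)."""
    return ref_magnitude + 2.5 * math.log10(ref_counts / target_counts)

# A target collecting 1/10 the counts of a magnitude-4.0 reference
# star is 2.5 magnitudes fainter:
print(target_magnitude(1000.0, 10000.0, 4.0))  # 6.5
```

Because both stars are measured on the same image through the same atmosphere and optics, most instrumental effects cancel in the ratio, which is what makes the reference-star comparison (like measuring elevation relative to sea level) so powerful.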
To see how the brightness of stars in a star cluster is determined, click here!