Many mathematical algorithms are performed so routinely that we sometimes do not stop to think about why we do them or why they work. One example of such a routine algorithm is reducing fractions to lowest terms. Why do we always perform this procedure? What is really happening when we reduce a fraction to lowest terms?

One of the basic reasons why we reduce fractions to lowest terms is to lessen the burden of calculating with large numbers. Of course, we would rather add or multiply 2/3 and 1/2 than 8/12 and 6/12. So you see, the effort of computing with fractions is lessened when they are reduced to lowest terms.

But what really happens when we reduce a fraction to lowest terms? Why is it possible, and why does it work?

Consider, for example, the fraction 8/12. We can reduce it to lowest terms by dividing both the numerator and the denominator by 4. That is,

8/12 = (8 ÷ 4)/(12 ÷ 4) = 2/3.

In another notation, we can write

8/12 = (2 × 4)/(3 × 4).

Now, since we know from the properties of fractions that we can write the fraction

(2 × 4)/(3 × 4)

as (2/3) × (4/4), we can write the fraction above as

8/12 = (2/3) × (4/4).

Now, we know that 4/4 is just 1, so basically we are just multiplying the fraction 2/3 by 1, and thus not changing its value.
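The steps above can be sketched in Python (the fraction 8/12 is just an illustrative example; `Fraction` from the standard library is used only to check that the value is unchanged):

```python
from fractions import Fraction

# Worked example: reduce 8/12 by dividing out the common factor 4.
num, den = 8, 12
common = 4  # greatest common factor of 8 and 12

reduced_num = num // common   # 8 / 4 = 2
reduced_den = den // common   # 12 / 4 = 3

# Dividing both parts by 4 is the same as multiplying by 4/4 = 1,
# so 8/12 and 2/3 are the same number.
print(Fraction(num, den) == Fraction(reduced_num, reduced_den))  # True
```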

This can be generalized to any fraction a/b which can be reduced to lowest terms. If g is the greatest common factor of a and b (just like 4 in the example 8/12), then we can write a = p × g and b = q × g, so the fraction can be written as

a/b = (p × g)/(q × g).

Therefore, this fraction can again be rewritten as

a/b = (p/q) × (g/g) = (p/q) × 1 = p/q.

From here we can conclude that the lowest-terms form p/q of a/b is really the same fraction, only represented differently.
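The general argument can be sketched as a short function (a minimal sketch; the function name is my own, and `math.gcd` computes the greatest common factor g described above):

```python
from math import gcd
from fractions import Fraction

def reduce_to_lowest_terms(a: int, b: int) -> tuple[int, int]:
    """Reduce a/b by dividing both parts by g = gcd(a, b).

    Since (a/g)/(b/g) differs from a/b only by a factor of g/g = 1,
    the returned fraction has the same value as the original.
    """
    g = gcd(a, b)
    return a // g, b // g

print(reduce_to_lowest_terms(8, 12))   # (2, 3)
print(reduce_to_lowest_terms(45, 60))  # (3, 4)

# The reduced fraction is the same number, just represented differently.
print(Fraction(45, 60) == Fraction(3, 4))  # True
```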