The ultimate size of such files is driven by their bitrate: how many bits the compressor (a.k.a. encoder) uses to represent each second of audio. Uncompressed CD-quality audio takes 176,400 bytes, or 1,411,200 bits, to store each second of sound. That works out to roughly 1,411 kilobits per second, or 1411 kbps. Typical lossy formats use anywhere from 64 to 256 kbps to store the "same" information.
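The arithmetic behind those numbers is simple. A quick sketch (the variable names here are just for illustration):

```python
# CD-quality audio: 44,100 samples per second, 16 bits per sample, 2 channels
sample_rate = 44_100
bits_per_sample = 16
channels = 2

bits_per_second = sample_rate * bits_per_sample * channels  # 1,411,200 bits
bytes_per_second = bits_per_second // 8                     # 176,400 bytes
kbps = bits_per_second / 1_000                              # ~1411 kbps

print(bits_per_second, bytes_per_second, kbps)
```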
The problem is that a bitrate speaks only to the size of the file, not its quality. For example, one could write a compression format that achieves a 256 kbps bitrate simply by keeping the first 256,000 out of every 1,411,200 bits (about 18%) in any given second and discarding the rest. Although some foolish people might assume a song encoded in this format would sound better than a typical 128 kbps mp3, any listening test would easily prove the inferiority of such a technique.
The mp3 format, developed by Fraunhofer and Thomson, is heavily patented and was groundbreaking in its time. Because it was the first widely-adopted lossy audio compression codec, people associate certain bitrates with certain levels of quality.
However, even within the aging mp3 format, and even within a single bitrate (say, 128 kbps), the sound quality of various encoders varies drastically. The Xing encoder is fast but produces poor-sounding files even at 128 kbps. The Lame encoder is a bit slower but produces markedly better-sounding files at the same bitrate.
Newer lossy audio compression codecs like WMA and Ogg Vorbis use different psychoacoustic models and noticeably improve sound quality at a given bitrate even over the best mp3 encoder.