As early as World War II, engineers were experimenting with digital audio, converting the smooth analog waves of sound into discrete values. This was accomplished by "sampling" the sound wave many times per second, with each sample recording the amplitude of the wave at that instant (including whether the wave was "up" or "down"). By the Nyquist Theorem, the sample rate (the number of samples per second) must be at least twice the highest recorded frequency; otherwise, frequencies above half the sample rate "alias," masquerading as lower frequencies and distorting the recording.
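The aliasing the Nyquist Theorem warns about can be seen with a few lines of Python. This is just an illustrative sketch (the function name `sample` is my own): a 3 Hz tone sampled at only 4 Hz, below its 6 Hz Nyquist rate, yields sample values indistinguishable from those of a 1 Hz tone.

```python
import math

def sample(freq_hz, sample_rate_hz, n_samples):
    """Sample a unit-amplitude sine wave at the given rate."""
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)
            for n in range(n_samples)]

# A 3 Hz tone sampled at only 4 Hz (below its Nyquist rate of 6 Hz)...
undersampled = sample(3, 4, 8)

# ...produces exactly the same samples as an inverted 1 Hz tone,
# so the recording "hears" 1 Hz where 3 Hz was played.
alias = [-s for s in sample(1, 4, 8)]

matches = all(abs(a - b) < 1e-9 for a, b in zip(undersampled, alias))
print(matches)
```

Once the samples are taken, no amount of cleverness can tell the two tones apart, which is why the sample rate must be chosen before recording, not after.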
So, in the 1970s, when Philips and Sony began looking for a way to improve the audio quality of recorded music, they turned to digital sampling. A sample rate of 44,100 samples per second (44.1 kHz) was chosen because it comfortably exceeded the target rate of 40 kHz (twice the 20 kHz upper limit of human hearing) and because it was compatible with video tape recorders, the storage medium of choice until the little silver plastic discs we know as CDs were perfected.
Each "sample" is a 16-bit signed integer, ranging from -32,768 to 32,767. This number indicates the amplitude of the wave at the instant of sampling. Thus a sampled wave oscillating back and forth between -32,768 and 32,767 would be the loudest wave this format could represent, a wave changing from -1 to 1 would be the quietest nonzero signal, and a run of zeroes would indicate complete silence. This range of values for the amplitude is fairly fine-grained, which allows even subtle volume differences to be accurately represented. Sampling audio in this digital fashion is known as Pulse Code Modulation (PCM), and it is the most popular method of digital sampling.
PCM digital audio produces quite an accurate picture of the "live" sound; only the keenest listeners with good equipment can distinguish it from the original.