
What ATRAC SP version is on the RH1/M200?




I know there are different versions of ATRAC SP, but what version is on the RH1/M200? Is it the best one (Type-R or 24-bit)?

...A dead horse by any other name still doesn't smell sweet...

When you're talking about straight SP, it is the same Type-R codec as older models. The Type-S designation only improves LP codecs. Doing your own research (repeatedly posting the same question numerous times in different topics is not research) will benefit you in ways you cannot imagine.

Do you even know the significance of "24-bit" in this context? I'm sure that you saw it in some marketing drivel at some point and latched onto it. In reality though, PCM requires at least this much resolution--every HiMD unit at least plays PCM. Marketing can be intoxicating--if you do not come up for air, though, people will continue to deride you for it.

EOD


"Technically speaking, bit depth is only meaningful when applied to pure PCM devices. Non-PCM formats, such as DSD or lossy compression systems like MP3, have bit depths that are not defined in the same sense as PCM. This is particularly true for lossy audio compression, where bits are allocated to other types of information, and the bits actually allocated to individual samples are allowed to fluctuate within the constraints imposed by the allocation algorithm."

A m00se bit my sister 24 times

24 m00se bit my sister once

A m00se bit 24 of my sisters

My sister bit a m00se


So it's like oversampling, which has been on CD players for 20 years.

We all know the 1-bit wars were basically marketing.

Not really. Here 24-bit, 16-bit, and 1-bit are characteristics of entirely different processes, and cannot be compared. The 16-bit part is clear: a sound sample taken every 1/44,100 of a second contains 16 bits of information per channel. During compression/decompression, a stream of these samples is processed by a lossy algorithm, and 24-bit words are used for the calculations, presumably giving more precision than 16-bit ones. Whether this actually affects sound quality is not quite clear, since the actual bitrate of the compressed music is the same (292 kbit/s). What may actually be affected is the calculation speed.
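As a rough illustration of why wider internal words matter (a toy fixed-point sketch in Python, not ATRAC's actual algorithm; the gains and word sizes here are made up), compare the rounding error when an intermediate result is held in a 16-bit word versus a 24-bit one:

```python
import numpy as np

def quantize(x, frac_bits):
    """Round x onto a fixed-point grid with the given number of fractional bits."""
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step

rng = np.random.default_rng(0)
pcm = rng.integers(-2**15, 2**15, size=100_000) / 2**15   # 16-bit samples scaled to [-1, 1)

g1, g2 = 0.3137, 2.71      # two made-up processing stages
exact = pcm * g1 * g2      # double-precision reference

# Hold the intermediate product in a 16-bit word vs. a 24-bit word, then finish the chain
via16 = quantize(pcm * g1, 15) * g2
via24 = quantize(pcm * g1, 23) * g2

print("RMS error, 16-bit intermediate:", np.sqrt(np.mean((via16 - exact) ** 2)))
print("RMS error, 24-bit intermediate:", np.sqrt(np.mean((via24 - exact) ** 2)))
```

The 24-bit intermediate keeps the rounding error roughly 256 times smaller, though whether any of that survives the 292 kbit/s encoding is another matter.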

The 1-bit DAC concept is a bit more interesting, as it applies to various equipment. Imagine a good old classic DAC, taking digital words at the input and giving an analog signal at the output. The input signal defines the level of the output signal 44,100 times per second. For 16 bits there are 2^16 = 65,536 possible levels. This means that you need 65,536 pairs of transistors and very precise resistors per channel to obtain the analog signal. Such DACs really did exist in early CD players, and still exist in relatively expensive equipment. For a "classic" 20-bit DAC the number of transistor/resistor pairs per channel increases to 1,048,576, making them prohibitively expensive for anything but the highest-level professional studio equipment. And "classic" 24-bit DACs, which would need 16,777,216 transistor/resistor pairs, simply do not exist because of current technology limitations.

On the other hand, in a 1-bit "DAC" the signal is not really converted to analog at all. The output still has only two levels, +MAX and -MAX, and is produced at a much higher frequency than 44.1 kHz (in the MHz range), but its short-term average follows the analog signal. After passing through the analog output filters (if any), amplifiers, speakers, and your ear canals, it creates the impression that you are listening to the music. And this technology is dirt cheap.
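For the curious, here is a toy first-order delta-sigma modulator in Python (an illustration of the principle only, not whatever circuit Sony actually uses): the output is never anything but +1 or -1, yet once it is low-pass filtered its average tracks the input.

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order delta-sigma modulator: maps samples in [-1, 1] to a +/-1 bitstream
    whose short-term average tracks the input."""
    bits = np.empty(len(x))
    acc, fb = 0.0, 0.0
    for i, sample in enumerate(x):
        acc += sample - fb                    # integrate the error between input and feedback
        fb = 1.0 if acc >= 0.0 else -1.0      # 1-bit quantizer
        bits[i] = fb
    return bits

fs, osr = 44_100, 64                          # 64x oversampling puts the bit rate in the MHz range
n = 32_768
t = np.arange(n) / (fs * osr)
x = 0.5 * np.sin(2 * np.pi * 1_000 * t)       # a 1 kHz tone at half scale

bits = delta_sigma_1bit(x)                    # only ever +1.0 or -1.0
recovered = np.convolve(bits, np.ones(osr) / osr, mode="same")  # crude stand-in for the analog filter
print("worst deviation after filtering:", np.max(np.abs(recovered - x)))
```

Two output levels, fast switching in the digital domain, and no precision resistor ladders, which is why it is so cheap.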


Thanks for the explanation. So, to save cost, the time-multiplexed 1-bit DAC simulates a space-multiplexed n-bit one, since DSP chips, like other CPUs, can now run orders of magnitude faster than when MD was first invented. Presumably this is what Sony is talking about with "phase shifts"? I.e., they can manipulate the sound image by playing with the sequencing of those bits...


Interesting thread, although it appears it's one for locking up.

Found this link, which explains the DAC history for CD players.

The first DACs for CDs were 14-bit, then 16-bit came along, then 1-bit MASH and Bitstream. Then came SACD.

16-bit DACs are still expensive today.

Regarding 24-bit DACs:

The extra bits used by these converters may be either thrown away, be left unused, or be put to other intelligent uses that will be discussed later. Unfortunately, it is a misconception that the use of an 18- or 20-bit DAC gives true 18 or 20-bit audio performance.

http://www.tc.umn.edu/~erick205/Papers/paper.html

In the end, MiniDisc DACs make no difference.


Hmm, my first CD player (1988) definitely had 20-bit oversampling. See here http://www.sakurasystems.com/articles/Kusunoki.html for some fairly deep stuff about this. 16-bit isn't quite enough resolution as we want a cutoff (Nyquist frequency) of 1/2 of 44.1 kHz, i.e. 22.05 kHz. Anyone who has played with sound spectra will know you see response up to 22.05 kHz when it's a CD.

So they did some trickery to use effective 20-bit resolution with the available 16-bit DACs. (Of course computers like powers of 2, so the next logical DAC size over 16 is not 20 or 24 but 32, which is simply unfeasible.) The article (written in 1997) seems to argue in favour of 16-bit, non-oversampled DACs.
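For what it's worth, the digital half of that oversampling trick can be sketched like this (Python with NumPy/SciPy; the 4x ratio, test tone, and filter length are arbitrary choices, not anything from the article): zero-stuff the 44.1 kHz samples and low-pass them at the original Nyquist, so the DAC runs at 176.4 kHz and the analog reconstruction filter afterwards can be much gentler.

```python
import numpy as np
from scipy import signal

fs, ratio = 44_100, 4
n = 4_096
x = np.sin(2 * np.pi * 1_000 * np.arange(n) / fs)   # a 1 kHz test tone at 44.1 kHz

# 4x oversampling: insert zeros between samples, then interpolate with a low-pass filter
up = np.zeros(n * ratio)
up[::ratio] = x * ratio                             # scale so the tone keeps its amplitude
lpf = signal.firwin(255, 20_000, fs=fs * ratio)     # cutoff just below the original Nyquist
y = signal.lfilter(lpf, 1.0, up)                    # the same tone, now sampled at 176.4 kHz

# scipy.signal.resample_poly(x, ratio, 1) does the same job in one call
```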

The closest analogy might be like saying LP2 is better than SP because you don't have to play tricks to process the data up to its cutoff frequency?

So more isn't necessarily better. Some of the differences between high-end MD decks and the rest are in the nature of the DAC and the way it is filtered on the output (analogue) side (current pulse vs. hybrid pulse). Whilst I don't really grok what that is all about, it seems related. You can use cheap digital technology, as Avrin says, to interpolate to a higher resolution, or you can use a simpler but more expensive system to get nicer sound.

Have I got that right?


16-bit isn't quite enough resolution as we want a cutoff (Nyquist frequency) of 1/2 of 44.1 kHz, i.e. 22.05 kHz. Anyone who has played with sound spectra will know you see response up to 22.05 kHz when it's a CD.

This only means that the CD is incorrectly mastered. And this has nothing to do with 16 or more bits.

To create a perfect CD (or any other digital medium), the signal must be prepared according to the Nyquist theorem, which requires that the upper limit of its frequency response be strictly less than half the sampling frequency. Since the preparation takes place in the analog domain (before the analog-to-digital conversion), and no analog filter is able to cut frequencies steeply to zero, the entire 20 - 22.05 kHz region is used for the frequency cut-off. That is, the filter starts attenuating frequencies from 20 kHz, and by 22.05 kHz they are at zero level. Remember that even the most expensive CD players have their frequency response limited to 20 kHz.

This filter is actually required by the standard (to make it possible to perfectly digitize the signal, and then perfectly restore it during playback). And CDs made in the 1980s (and later, in the case of Japanese issues) actually have their frequencies cut off above 20 kHz.
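The real anti-aliasing filter lives in the analog domain before the ADC, but the same band edges can be illustrated with a digital design, for instance when downsampling a hypothetical 96 kHz master for CD (Python/SciPy sketch; only the 20 kHz / 22.05 kHz edges come from the description above, the ripple and attenuation figures are illustrative):

```python
from scipy import signal

fs_master = 96_000                   # hypothetical higher-rate source being prepared for CD
passband, stopband = 20_000, 22_050  # the band edges described above

# Pick the minimum elliptic order that stays flat to 20 kHz (0.1 dB ripple)
# and is at least 100 dB down by 22.05 kHz; 0.1 and 100 are illustrative numbers
order, wn = signal.ellipord(passband, stopband, gpass=0.1, gstop=100, fs=fs_master)
sos = signal.ellip(order, 0.1, 100, wn, btype="low", output="sos", fs=fs_master)
print("elliptic filter order needed:", order)

# filtered = signal.sosfilt(sos, audio)   # 'audio' would be samples at fs_master,
#                                         # now safe to resample down to 44.1 kHz
```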

Nowadays, no one cares about sound quality, so no filter is used, and we have CDs with ranges going all the way up to 22.05 kHz. These CDs do not comply with the requirements of the Nyquist theorem, so no one guarantees that they play as they should.

As an example of a properly mastered CD, look at the frequency spectrum of the first track of the first Japanese release of Michael Jackson's "Bad" album:

[Image: frequency spectrum of the track, rolled off above 20 kHz]


Did I ever mention that the original CDDA standard provides the means to increase the effective dynamic range of higher frequencies to almost 18 bits? This technology is long forgotten. It was called pre-emphasis. But any CDDA-compliant device must still support it.

In ordinary music the amplitudes of the various frequency components generally decrease as the frequency increases. This makes it possible to apply an analog filter (strictly defined by the standard) that increases the amplitudes of higher frequencies while still keeping them below the maximum allowed amplitude. After that, the signal is converted to digital. During playback, after digital-to-analog conversion, a reverse filter (again strictly defined by the standard) is applied, bringing the higher frequencies back to their normal levels. This helps to minimize the effects of digital quantization noise, which mostly affects higher frequencies, effectively giving them 10 dB more dynamic range.
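For reference, the standard CD emphasis is defined by 50 µs / 15 µs time constants, and the playback-side correction can be approximated digitally like this (Python/SciPy sketch using the bilinear transform; this is an illustration, not necessarily how SonicStage or any player implements it):

```python
import numpy as np
from scipy import signal

fs = 44_100
t1, t2 = 50e-6, 15e-6       # standard CD emphasis time constants: 50 us and 15 us

# Pre-emphasis on the disc is the shelf H(s) = (1 + s*t1) / (1 + s*t2), about +10 dB at the top.
# De-emphasis on playback is the inverse shelf, mapped to a digital filter here.
b, a = signal.bilinear([t2, 1.0], [t1, 1.0], fs=fs)

# corrected = signal.lfilter(b, a, pcm)   # 'pcm' would be a pre-emphasized rip as floats
w, h = signal.freqz(b, a, worN=4_096, fs=fs)
idx = np.searchsorted(w, 16_000)
print("de-emphasis gain at 16 kHz: %.1f dB" % (20 * np.log10(abs(h[idx]))))
```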

An example is Michael Jackson again, this time the first track of the first Japanese release of "Thriller". Red - pre-emphasized signal ripped directly from the CD. Green - de-emphasized signal.

[Image: spectra of the pre-emphasized (red) and de-emphasized (green) signal]

Actually, CDs with pre-emphasis are really rare, and those that exist mostly come from Japan.

When played on a computer with a software player that is not aware of pre-emphasis, these CDs will have loud, ringing high frequencies that are not very pleasant to listen to.

Luckily, SonicStage is fully aware of pre-emphasis, and compensates for it when playing/ripping. The MD standard also provides a pre-emphasis flag bit on the discs; whether it was actually used in production is not known. The requirements for presenting digital material to DADC state that pre-emphasized material is accepted, but:

Please note that de-emphasis will be performed during format conversion at DADC

This only means that the CD is incorrectly mastered. And this has nothing to do with 16 or more bits.

To create a perfect CD (or any other digital medium), the signal must be prepared according to the Nyquist theorem, which requires that the upper limit of its frequency response be strictly less than half the sampling frequency. Since the preparation takes place in the analog domain (before the analog-to-digital conversion), and no analog filter is able to cut frequencies steeply to zero, the entire 20 - 22.05 kHz region is used for the frequency cut-off. That is, the filter starts attenuating frequencies from 20 kHz, and by 22.05 kHz they are at zero level. Remember that even the most expensive CD players have their frequency response limited to 20 kHz.

This filter is actually required by the standard (to make it possible to perfectly digitize the signal, and then perfectly restore it during playback). And CDs made in the 1980s (and later, in the case of Japanese issues) actually have their frequencies cut off above 20 kHz.

Nowadays, no one cares about sound quality, so no filter is used, and we have CDs with ranges going all the way up to 22.05 kHz. These CDs do not comply with the requirements of the Nyquist theorem, so no one guarantees that they play as they should.

Hahah, so when I read something in from MD -> PC, I inadvertently do the high-frequency cut-off because ATRAC throws those frequencies away, and I get to skip that step. That's amazing. It explains why the whole process I evolved (by trial and error) for re-mastering analogue recordings via MD works as well as it does.


Nope. Higher frequencies must be cut off before the signal is converted to digital. Otherwise a signal that doesn't conform to the Nyquist requirements is further mangled by the lossy compression.

Even if you don't use any lossy compression, but apply a digital frequency filter to an already non-compliant signal, it will not improve it in any way. I.e., it will not make it compliant.


At the end of the day, my standard answer: listen and see if you like it; if so, then it is good enough.

Bob

AFAIC, Bob, this is indeed what counts most. You can look at all the numbers you want to and determine which unit should sound best, but in the end it comes down to whatever sounds best to the individual listener - which may not match what's supposed to be "best." I don't know why it should be like that, but so it is.



I think it's very natural. You only run into "difficulty" when someone won't believe their own ears because they think having the highest spec somehow raises their stature.

There's this funny little plug for the NPR show "Wait Wait... Don't Tell Me! - the NPR News Quiz". At the end of it he says, "It'll be the most fun you've had since bragging about your S.A.T. scores." I know it doesn't seem relevant, but it just popped into my head.

