Everything posted by dex Otaku
-
Hints: How to Control Your Levels and Make Undistorted Recordings
dex Otaku replied to dex Otaku's topic in Live Recording
Addendum: While testing my equipment today [had a problem with my mics which has since been fixed - but it's time for new mics] I took a moment to listen critically to the difference between AGC's STANDARD and LOUDMUSIC settings. STANDARD mode: has no hold and a release which takes about 2-2.5secs to return levels to normal. LOUDMUSIC: has an infinite hold. I listened for about 90 seconds and it still hadn't released since the initial tap I made on the mic. Now .. I never thought about this before. STANDARD mode might be compressing/limiting .. and LOUDMUSIC might just be setting the levels according to the loudest peak that occurs - without compression. Intriguing. -
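The hold/release difference described above can be sketched as a toy gain follower [a hypothetical illustration of the behaviour I heard, not Sony's actual AGC algorithm; the release rate and clamping logic are made up for the example]:

```python
# Toy sketch of the two AGC behaviours [assumptions: samples are linear
# amplitudes in the range -1..1 after gain; release rate is arbitrary].

def agc_gain(samples, mode="STANDARD", release_per_sample=0.001):
    """Track the gain a simple AGC would hold after each sample."""
    gain = 1.0
    gains = []
    for s in samples:
        if abs(s) * gain > 1.0:     # too hot: pull the gain down immediately
            gain = 1.0 / abs(s)
        elif mode == "STANDARD":
            # STANDARD: gain gradually releases back toward unity
            gain = min(1.0, gain + release_per_sample)
        # LOUDMUSIC: infinite hold - the gain stays wherever the loudest
        # peak so far left it, and never recovers
        gains.append(gain)
    return gains

# Quiet signal, one loud transient [the "tap on the mic"], then quiet again.
loud_then_quiet = [0.1] * 5 + [2.0] + [0.1] * 2000
std = agc_gain(loud_then_quiet, mode="STANDARD")
loud = agc_gain(loud_then_quiet, mode="LOUDMUSIC")
# After the transient, STANDARD has released back to unity gain,
# while LOUDMUSIC is still pinned at the level the peak set.
```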
Sony Hi-MD Walkman MZ-RH1 Pictorial
dex Otaku replied to Ishiyoshi's topic in Product Reviews/Pictorials
USB power = USB power. The older units with standard mini-B plugs will not work because as soon as they see power [via USB], they expect that they're plugged into a computer [as already described]. The voltage/current theory [as described above] is incorrect, though. If Sony had used non-standard power over the USB connector, it would have been illegal for them to even refer to the device as USB [since they'd be breaking the standard]. -
I've written the same instructions as above more than once for others myself. Kudos for starting a thread for it. My own advice? Partial echo of yours:
* Tag things properly; this includes the "various artists" flag which not all software respects or uses, proper capitalisation [the rules CDDB gives you are crap - don't capitalise every damned word! LEARN SOME F*ING GRAMMAR RULES, people], correct spelling, &c.
* DO NOT TRUST CDDB OR FREEDB! A fair majority of entries in their databases are simply junk.
* USE the date-released tag.
* Check that what you've ripped or imported is correct in the library BEFORE downloading it and then coming here to complain about how SS never imports things in the correct order - chances are, it was either your fault or the fault of the software you ripped with; either way, if you don't verify that things are correct, the problem is *your* fault, not Sony's!
* USE your BRAIN!
There's my happy morning message for all of you.
-
RE: Normalisation - this is sort of what you want, but not exactly. Normalisation usually [with most software] refers to PEAK normalisation, which is -not- what you want. Sound Forge has an option for RMS normalisation as well [most other programs either lack this completely or call it something else] which -is- what you want. Peak normalisation just sets whatever the peak level in the file you hand it is to whatever new level you specify. An example - if the peak level of a particular live recording is -10dBfs, and you tell it to normalise to -0.2dBfs, the software just raises the volume of the entire track by 9.8dB. RMS normalisation actually analyses the file and figures out what its peak RMS value is [like average volume]; it then adjusts the volume of the entire file either up or down using that RMS value as the reference [average rather than peak volume]. RMS normalisation also usually includes an option to apply dynamic compression should the peak volume after initial processing exceed 0dBfs; this is basically an implementation of bit-pushing. Anyway.
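The arithmetic behind the two approaches fits in a few lines [a sketch only; the function names are mine, and real tools like Sound Forge obviously do more - scanning the whole file, applying the gain, optionally limiting]:

```python
import math

def peak_normalise_gain_db(peak_dbfs, target_dbfs):
    # Peak normalisation: one fixed gain that moves the loudest peak
    # to the target level; everything else moves with it.
    return target_dbfs - peak_dbfs

def rms_normalise_gain_db(measured_rms_dbfs, target_rms_dbfs):
    # RMS normalisation: same idea, but referenced to the measured
    # average [RMS] level instead of the peak, so the gain may be
    # positive or negative regardless of where the peaks sit.
    return target_rms_dbfs - measured_rms_dbfs

def rms_dbfs(samples):
    # Measured RMS level of a block of linear samples, in dBfs.
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# The example from the text: peak at -10dBfs, normalise to -0.2dBfs
# -> the whole track comes up by 9.8dB.
print(peak_normalise_gain_db(-10.0, -0.2))
```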
-
Update after nearly a year of using the RH10 [and nearly 2 of using the NH700]:
* The RH10's face does scratch easily, but it also polishes easily [i use CD scratch repair polish, it works wonderfully].
* After almost a year of near-daily use, I can say the build quality of this unit is great. Nothing is loose, nothing rattles, everything is pretty much as it was when it first arrived except for cosmetic wear around the corners.
* I use the unit at least 1/3rd of the time with the AA sidecar. The sidecar is as durable as the unit. I have no wear problems with either, even with regular use.
* I fully expect the display to outlive the optical block. [This is not to say I'll be right, mind you.]
* I prefer the sound of the NH700 when listening [the RH10's bump in the high end makes it slightly harsh to my ears, but note that I am hypersensitive in this regard, so it won't bother most people].
* I carry the NH700 [LCD] for daytime recording, and the RH10 [OLED] for nighttime recording. The NH700 is also great for times when a lit display is inappropriate or too obvious.
* [side-note] I also carry a ziploc sandwich bag for when it rains. It's hard to use the jog dial or roller of either unit through the bag, but it does save the unit from humidity. You can read the OLED through the bag quite easily.
I have no real complaints about either unit.
-
I can't compare the RH10 vs. RH910, but I do have both an NH700 and RH10. I have had the RH10 for nearly a year and have had no issues with it. My one initial complaint about it was cosmetic - that the face scratches too easily. Once I got over being a tightarse about it and just let it get scratched as it wanted to, it wasn't an issue any more. I've polished the face once in the past year and the scratches come out very easily. The display itself is unscratched because the buttons are situated in such a way that pressure across the face of the unit falls mostly below the OLED. The OLED display works perfectly and has no issues. As can be expected, it is difficult to see under direct sunlight, but this is a fairly obvious tradeoff that any buyer should be able to realise before purchasing. This is part of why I tend to carry both my units if I'll be recording under non-controlled/unexpected conditions - the NH700 is perfect for daytime use, the RH10 for nighttime. Both have identical input sections, so recording quality is limited by the tolerances of the parts used to build them, rather than how expensive each model was. I have no issues with loosening buttons, jacks, or the battery cover. I use my AA sidecar regularly and it has no issues either. I tend to switch off between the RH10 and the NH700 for portable listening. The RH10 can be great for making fast discs of MP3s without having to transcode, though EQ is mandatory for proper listening. For consistency's sake I now usually transcode all audio regardless of originating format to a3+ just so I don't have to screw around with the EQ and its presets while walking. Oddly perhaps, I find the sound of the NH700 more pleasing when listening. It has slightly lower output but is also flatter, especially in the upper-midrange/lower treble where the RH10's digital amp has a slight 1-2dB "bump" upwards which sweetens the sound to most people's ears, but to mine just makes it harsh and grating.
Build-wise, I have no complaints about the RH10 other than the cosmetic one of how easily it is scratched. Mine at least is solidly-built, and has taken a few tumbles without issue. I fully expect the display to last longer than the optical block, or for that matter, the format.
-
[banging head on desk]
-
Good point, but you still missed my point completely.
-
base standard = lowest common denominator. It's like saying, "for recording on units with gen1 of this encoder, this will be the sonic profile of every recorder." This includes everything from the quality of the ADC used to the quality of the analogue sections of the input to the quality of the encoder. Everything in the stream has its own effect on the final output. This is why [IMO] every HiMD recorder that has come out thus far has exactly the same input section. Same ADCs, same preamps. The codec might change slightly between models released at different times [if they've continued working with it, it should hopefully improve with newer models, as ATRAC SP did with the original MD], but my guess is that this hasn't happened yet [HiMD is only 3 years old]. If such is the case, everything in the recording section of every HiMD model made thus far is identical, so recordings made on each model should only vary slightly based on the mass-production tolerances of the parts they use. Base-line, or base standard. Lowest common denominator. There is only one quality.
-
There are several possible reasons why, and they are not mutually exclusive:
* a desire to ensure a base standard for encoding quality from a live source
* the cutoff of the unit's ADC
* the limits of the unit's input preamp
My thought is that it's probably a combination of all three. Each section, before encoding even takes place, has its limits.
-
Nice to see this ancient debate still taking place. Personal feelings: Because I am abnormally sensitive to certain kinds of artefacting [including the kind commonly experienced as warbling sounds with lossy audio, and temporal artefacting of video which makes most digital cable and satellite video unwatchable for me] I know very well where my own thresholds are. I have two differing policies, one for recording, one for playback.
Playback: 48 and 64kbps a3+ sound pretty much the same to me. I use 64kbps for encoding mono voice recordings such as radio programmes and transcoded podcasts of documentary radio work and the like. Most of these recordings are already bandwidth-restricted with a high cut around 10kHz [or encoded as mp3s with low sampling and bitrates], because they are primarily voice to begin with. In this case, both 48 and 64kbps work fine for me. For music, 192kbps is my threshold of listenability with both mp3 and a3+. I also find a3+ to be slightly better at 192kbps than mp3 is [primarily because it uses a higher-resolution FFT algorithm than mp3, which makes artefacting less unpleasant even when it's present]. [side note: well-tweaked mp3 can sound good even at 128kbps. I have found exactly one album that does, and it's odd that it does, because the sonic makeup of the music itself does not lend itself easily to low-bitrate encoding.] Most of my listening discs are made with 256kbps-encoded tracks. The ones I've made recently are almost exclusively encoded at 192kbps [including transcoded mp3s] .. since most of my listening discs are HiMD-formatted MD80s, this means I can get another full album or so on each disc. This makes, for me at least, for near-perfectly sized single-artist compilations - there's enough on each disc to make for a good long listen, but also little enough to make remembering exactly what's there doable. Quality-wise, 256kbps for portable listening is transparent to me. I have no complaints about it.
I have tried 352kbps and the difference isn't enough to justify the loss of capacity, time-wise, per disc. I use 352kbps or PCM for copying things to take to others for auditioning purposes [i.e. "Hey Joel - what do you think of this mix?"] but otherwise I consider them overkill for my own listening purposes. Whoever made the comment about metal being easily encoded at low bitrates has obviously never done any codec testing: music with lots of layered, distorted guitars [square waves] mixed with drums, cymbals, and vocals is far more difficult to encode accurately than most other content, because the sheer density of the sound in the mix [especially complex harmonics like those making up a square wave - and that's for each element, let alone all of them together] is higher than with, say, a string or jazz quartet. Note that this is taking into account only the audible artefacting caused by a given codec and bitrate, and not the effects on soundstage and overall timbre [which are related but not exactly the same thing] caused by the same. Also, the idea that bitrate alone determines quality is completely false. There are a number of variables involved here. For one, higher bitrates do mean less is thrown away, yes - but a newer codec that does higher-resolution analysis [especially of a higher-resolution input stream such as 24/96 LPCM] is bound to do better at lower bitrates than an older codec does. Another is that the maturity of a codec will have a noticeable effect on the efficiency with which it can encode without artefacts; ATRAC SP falls into this category, and at this point I'd say that with Type-R its encoder is probably optimised close to the limits of the format, which explains why a3+ was created in the first place. In the end, the question is nearly moot - since they are different codecs, with different encoding methods used, each has its own profile of what conditions [sound] it will handle [encode] better [without audible artefacts].
Point being - there's no simple answer to this question; SP will do better under certain circumstances than HiSP; HiSP will do better under others. Which you choose to use is up to you and your ears. I expect that I will be using 192kbps most of the time from now on. Recording [on-unit]: 256kbps is my absolute lowest baseline. A 1st-generation recording made in 48 or 64kbps is completely, utterly worthless if you have any plans to edit it and re-encode it in any lossy format later on. I don't even make voice recordings that I know will only be edited and/or listened to on the unit itself at HiLP, because any kind of background noises picked up by the omni mics I'm usually carrying will make the artefacts very distracting and voices nearly unintelligible. Recording voice only with a close directional mic, a high-pass filter, and a pop screen [similar to the conditions in a studio booth] does work well with HiLP, though. Aside from this, even a reformatted MD80 holds 2:23 at HiSP. Most interviews or similar things don't even run that long, so there's little justification to use lower rates because of time concerns. My interests when recording are to get the best-quality, clearest, unaffected 1st-generation recording possible. This is part of my rationale for using manual levels as well as LPCM mode. When time concerns force a compromise, I use HiSP. Any "end-product" encoding I hand someone has gone through no more than 2 generations of lossy encoding, and the quality does not suffer as a result. As with any data-reducing encoding format, there is a "density threshold" for artefacting; in the case of sound recording, the overall [call it aggregate, even] complexity of the sound being recorded determines the artefacting threshold at a given bitrate. 256kbps is high enough that almost everything is transparent, with obvious deficiencies also being relatively well-tested [such as trouble handling transients because of limits in the FFT window size et al]. 
I am also hard-pressed to tell a difference between SP and HiSP; it's not that one is necessarily all-around better than the other, it's just that they are both transparent under most situations and their artefacting "profiles" are simply different [because of higher-res FFT and differing window sizes and such]. I choose to use HiSP because, with the recorders I have, it can be uploaded by digital means. If I had an RH1 I might consider using SP mode for recording, though I'll also point out that the difference in formatted disc capacities [since SP can't be used with HiMD-formatted discs] is large enough [80 mins vs. 2:23] that any possible difference in quality is hard to justify unless the recording itself is expected to fit on a single disc. That said, if 352kbps were available for on-unit recording, I would use it.
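The capacity tradeoff mentioned here is simple arithmetic [a sketch; the ~275MB usable payload for a HiMD-formatted MD80 is my estimate back-figured from the stated 2:23 at 256kbps, not an official number]:

```python
def recording_minutes(capacity_mb, bitrate_kbps):
    # time = capacity in bits / bitrate in bits per second, in minutes
    capacity_bits = capacity_mb * 8 * 1000 * 1000
    return capacity_bits / (bitrate_kbps * 1000) / 60

usable_mb = 275  # assumed usable audio payload of a HiMD-formatted MD80

print(round(recording_minutes(usable_mb, 256)))  # HiSP: ~143 min, i.e. about 2:23
print(round(recording_minutes(usable_mb, 352)))  # 352kbps: ~104 min - the capacity cost
```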
-
I'd say it's a matter of priorities. The idea with location/live recording at a rate that is supposed to provide transparent encoding [i.e. no or as few as possible audible encoding artefacts] is to give priority to the most important [i.e. audible] bands, in the range between about 500Hz - 5kHz [the vocal range, where most musical instruments also present most of their energy]. Psychoacoustic data reduction is intended specifically to take advantage of the average characteristics of human hearing. It's well-established that a fair majority of people 25+ years old can't even hear 19.5kHz as it is, so the bandwidth that would otherwise have been consumed encoding something that most people aren't going to hear is devoted to the range that has the most energy in it. In other words: setting a "decent compromise" cutoff for the highs also means setting the average amount of artefacting that occurs in the bands where the most energy usually is, and which are the most important to present with as little distortion [artefacting] as possible. This is not to say that "all recording should only be done to 19.5kHz because people are deaf." This is to say that this is a consciously chosen compromise made to ensure a base level of quality during realtime encoding.
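As a back-of-envelope illustration of that compromise [illustrative arithmetic only - real codecs allocate bits per critical band, not linearly across frequency]:

```python
nyquist_hz = 22050   # half of the 44.1kHz sampling rate
cutoff_hz = 19500    # the "decent compromise" high cut

# Fraction of the linear frequency band that lies above the cutoff;
# bits that would have gone to coding that range can instead be spent
# reducing artefacting around 500Hz - 5kHz, where most energy lives.
freed_fraction = (nyquist_hz - cutoff_hz) / nyquist_hz
print(f"{freed_fraction:.1%} of the band lies above {cutoff_hz}Hz")
```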
-
http://forums.minidisc.org/index.php?s=&sh...indpost&p=95858 http://forums.minidisc.org/index.php?s=&sh...indpost&p=95884 Nice work, otherwise. You are a welcome addition to the fora.
-
I'll say this again, just so it's clear: This is not a support thread. Do not post your questions here. There's a reason why we have a software support forum. If you have specific info about SS's uploading or downloading behaviour, feel free to add it here. Please do test things [at least minimally, as I did] to make sure that a particular behaviour is consistent.
-
This isn't a question for this thread, since it has nothing to do with this thread. Please post such questions in software support in the future. Short answer: CD importing *should* bring tracks into the SS library in order. If you open the album in the albums view and tracks are randomly ordered, just click on the column header for "track number" .. this will force it to re-sort the album. Clicking once will sort one way [ascending or descending] and clicking it again will do the opposite. Do it once or twice depending on which way it sorts first. Downloading the album after re-sorting should put the tracks in correct order on your MD/HiMD. As for albums already downloaded, you'll have to sort them manually.
-
Thanks for the research/testing, Avrin.
-
Wow. I admire the thoroughness of your testing, but can't think of anything to suggest. The problem itself doesn't actually make sense, does it. In any case, I have used disc images many, many times [i make them for people after recording them], as well as actual CDs, MP3s, split WAVs that were gapless to begin with, &c. but have only run into the moving trackmark issue in one instance, though unfortunately [as I said already] I can't recall exactly what it was .. I have a feeling it was when using split WAVs of contiguous sources, but I'll have to try that to confirm it. I do know [because I use it so often] that disc images work fine [gapless and trackmarks in the correct spots] at my end. The fact that you've had this problem since much earlier versions of SS is also quite suspicious. Have you tried ..
* checking your system for copy-protection software that might affect the reading of any/all CDs
* disabling any/all drive monitoring software if present
* disabling any/all utilities such as CloneCD [which use rootkit-like methods, among other things, to simulate being actual CD/DVD drives]
* basically - disabling anything and everything that might have any effect on reading from CD/DVD drives
... not much else I can think of. Unless you feel like sending one of those CD images to one of us for testing purposes.
-
Last I remember, this was always the case [tracks downloaded from SS can't be edited in any way, including titles &c.] but perhaps I'm wrong.
-
Please read and follow the submission guidelines. .. the ones in the software forum, that is. The more info you give, the more likely the problem is to be solved. Cheers.
-
Aaaaahhhhh.. thanks, volta.
-
Thanks. I'll act the clueless noob for once [rather than spending 8 hours writing a single post as I did last night]. [looking through service manuals] .. yes, they do have the same h/p amp. My mistake, based on the product literature, which for some reason [in every example of both that I've found .. again] states that the NH1 has "hd" and the NH900 is simply "digital." Bloody marketing people. It would be interesting to test them side-by-side.
-
Are you absolutely sure about this? Way back when I ran the level comparison tests for various country codes on my Canadian-model NH700, setting to the/any "euro" code would limit the output, and setting it to North America / Asia would uncap it. Period. Are you saying that there's a difference in how much a change can take place when comparing 1st and 2nd gen units? All I recall is everyone thanking the ones responsible for posting the country codes et al because their [previously eurocapped] units weren't [eurocapped] after making the change. The same also applied to 2nd-gen units, from what I recall. What I specifically don't recall is anyone discussing the kind of difference you're describing between 1st and 2nd gen units. Maybe I missed something [as I never attempted changing/testing with my RH10]? Or - if there was some noted difference [that I missed] between g1 and g2, perhaps it was because all 2nd gen units use the same digital amp, whereas 1st gen was split between hd-digital [NH1], digital [NH900], and analogue [NH600, NH700, NH800F].. They do *not* put out the same amount of power at the same impedance. Curiouser and curiouser.
-
Um. So, what about the threads about "removing the eurocap" from older models, the threads about the country codes for HiMD models, the threads about how to get into service mode and what option is the country code [and how changing that sets the volume cap of the headphone amp]?
-
There's even better reason to do this as a recordist, actually - having the full power of the headphone amp available makes live monitoring much, much easier [since what you're recording will often be of a far lower average level in dBfs than any recent pop recording]. Oh, and .. I don't know how to do it. Getting into service mode is likely the same as previous models, but I wouldn't change any settings without knowing for certain what they are beforehand.
-
aha. End point being: whatever directshow/ACM encoder you have installed is what most things will use. Goodgood.