Everything posted by dex Otaku

  1. The encoder your machine is set up to use through WiMP is InterVideo's .. I thought a version of the FhG codec actually came in the WiMP codecs pack, though. I guess I'm wrong on that - though I'm surprised the InterVideo codec is what's there. That doesn't seem like a likely default to me. Again, I could be wrong - I don't pay attention to what codecs come with certain pieces of software [like SS - I install FFDshow and force everything to use it; and WiMP - I have opened WiMP perhaps 10 times in the past 3 years, each time to play a streaming WMV that VLC didn't like .. WiMP is the antithesis of usable software, IMO - it makes SS look sensible and efficient, not to mention resource-friendly]. The last time I saw a good comparison of MP3 encoders was several [~5] years ago .. I have not seen any such comprehensive comparisons done since. I'm not investing the time in it, myself. Still, I'd seriously doubt there'd be any perceivable difference between the encoders. You could always try encoding a couple or three tracks of music that you're -really, really- familiar with using both codecs and see if you can detect anything with an ABX test or something.
  2. There are a few grammos in there, but as for the links - I have the original [text] document open here, as it was saved immediately before pasting into the "post thread" page here .. and all the links are correct in that document. The board appears to have added "forums.minidisc.org" to the beginning of all of them for some reason .. though wait .. the first few links were done with BBCode, the rest were straight HTML .. which means that either the board or the server is adding that to each link there as html. Is that canonical linking? I can't remember. Content addition: When I'm talking about mic sens settings up there [aka preamp gain settings] and suggesting what low and high sens can be used for .. here's a note on that .. when I say "low sens" is good for conversation, I mean .. between a few people who are a metre or two away. It's important to remember that distance from the source is an important part of things. Trying to pick up conversation [or, say, a lecture] with someone who's speaking more than a couple of metres away from you is a job better suited to "high sens" mode, for instance. Common sense comes heavily into play with so many of these things. There are few constants involved in recording; every venue/location is different, every source or subject has its own characteristics. It pays to know a bit about the parts that remain the same [like how to operate your recorder, how sensitive your mics are and what the meters read at a certain perceived loudness...] RE: the links, someone else will have to repair them since I can't.
  3. For audio CD creation, I use dedicated software such as CD Architect. I still use Nero for most other burning needs. I just do a custom install, omit the components I don't need, and open Nero directly rather than using the wizards [which just annoy me].
  4. Preface: As with most of the things I write, the length of this is sure to seem daunting. To those of you who are intimidated by such a thing, there's an easy solution: don't read it, and likewise don't bother to learn how to do a better job of recording. Please forgive the odd formatting [strange varying font sizes for no reason] – it appears to be something that the board's skin imposes on html blockquote sections, which I can't get rid of without manually setting the font size of every section. Bad CSS, BAD! <hr noshade size=1> Member Navaro sent me this message last night [reposted here with permission]: There are a few questions I can boil down from his messages and attempt to answer here. Anyone who can come up with more correct or clearer answers, please do add to this thread. <ul><li>What is the difference between mic-in and line-in?</li><li>What is AGC?</li><li>What is the gain setting for and what does it affect?</li><li>Why should I use manual levels instead of AGC?</li><li>What level should I use when recording with manual levels?</li><li>What is a battery module for?</li><li>What's the practical difference between a battery module and an attenuator?</li><li>What's an external preamplifier for?</li></ul> Some of my responses contain the answers to the other questions, so there is repetition in here for those who actually read the whole thing.
<hr noshade size=1> <blockquote><i>For those who are interested in more details about the quality of HiMD recorders' mic preamps, see this article at the Wildlife Sound Recording Society's website, and its two related articles, noise performance and gain and overload performance, as tested with an NH-700.</i></blockquote> <hr noshade size=1><blockquote><b>[note: all of the following applies only to recording via analogue inputs]</b></blockquote> <b>What is the difference between mic-in and line-in?</b> <blockquote>The first difference between these two is that the mic input has a preamplifier made to bring microphone-level signals [millivolts-range] to line-level [around 1V peak-to-peak]. Most MD and HiMD recorders have a low and high-gain [or sensitivity] setting. Under most circumstances – unless the sounds you're trying to record are rather quiet [since high-sens adds another 20dB of gain to the preamp] – the low-gain setting should suffice. The second difference between the two analogue inputs is that the mic-in has “plug-in power” on it. Plug-in power is meant to power electret condenser microphones such as the SP-BMC-12s being used by Navaro [or the similar SP-TFB-2s I use]. Condenser microphones require bias power in order to work; HiMD recorders supply between 1 and 2.5V on the mic input for this purpose. The line input is made to take line-level signals [around 1V peak-to-peak]. It has a very limited amount of gain as it is made to take signals that have already been preamplified for use with a recorder or amplifier. Because there is no “plug-in power” on the line input, electret condenser microphones will not provide any signal if plugged in there.</blockquote> <b>What is AGC?</b><blockquote>AGC means Auto Gain Control. Some companies refer to the same thing as Auto Level Control [ALC] among other things. 
The basic idea behind AGC is that when the volume gets too high, the recorder does the job of “ducking” levels automatically to avoid distortion in your recording. In more technical terms, AGC is a basic implementation of a compressor/limiter, or [more likely, in my opinion] a dynamic compressor [that being a compressor whose ratio increases as levels increase above its threshold]. There are two modes with Sony's AGC on MD and HiMD recorders: Standard and LoudMusic [or ForLoudMusic depending on your model]. The most likely difference between these two settings [I could be wrong on this, but this is my guess based on the difference in how they sound] is respectively a short release time versus a longer release time. In situations where the volume is consistently loud, the difference between them shouldn't be very obvious, but in situations where the volume varies greatly over short periods, LoudMusic is likely a better choice since the compression taking place should be less obvious [by avoiding the “pumping” effect of a fast release]. <b>The advantage with AGC is that it's a relatively “fire and forget” way of doing things. You can start recording and never mind setting levels or monitoring the meters.</b> <b>The disadvantages to AGC mostly revolve around having to rely on how the fixed gain of the mic preamp relates to whatever the sensitivity of your microphone is, as well as the difference between the softest and loudest sounds [dynamic range] you're recording.</b> That may sound complicated, but what it amounts to is this: <b>using AGC means you're stuck with a default that doesn't always work, and in many cases will make things substantially worse rather than a bit better.</b> This is another prime example, in my mind, of something that makes things much harder in the end even though its intended purpose is to make things simpler to begin with. <b>In some cases, the default works quite well. 
For example, </b>with my SP-TFB-2s, with the mic preamp set to low-sens, the average level of conversation falls just below the AGC's threshold; this means that any sounds ranging from quiet to conversation-level are uncompressed and sound completely natural, whereas anything much louder than conversation gets compressed/limited to prevent distortion. This is pretty much an ideal case for using the AGC. <b>Another case in which the AGC works well is for what I would call broadcast-ready recording</b> [hopefully no broadcast recordists take offense to this]. In these situations it's usually more important to capture the sound, period, than to capture the sound with high fidelity and a natural dynamic range. Using a monaural omnidirectional microphone and the high-sens setting, a reporter's recording captures all of the sound in a media scrum or of an interview. In this case, though – all of the sound is highly compressed, and it's obvious when it's played back. Again, this is pretty much an ideal case. <b>Most live+amplified music recording situations work <i>very</i> poorly with AGC. </b> Acoustic recordings can work great if you're distant-mic'ing, but highly-amplified music or even mildly-amplified in a small space can quickly push levels into solid compression, basically eliminating <i>any</i> sense of dynamic range in the recording. The results tend to sound quite unnatural, though of course this is as much a matter of preference as anything; some people actually like their recordings to be constantly and consistently at top volume. For some listening situations, this is actually appropriate, too – such as when listening in a noisy vehicle, or with very cheap, low-power portable stereo systems. 
As a last note to this section – be sure to read the section <i>When recording with the line-in:</i> at the end of my answer to <i>Why should I use manual levels instead of AGC.</i></blockquote> <b>What is the gain [mic sensitivity] setting for and what does it affect?</b><blockquote>The gain setting sets the mic preamplifier's gain. High-sens mode adds another 20dB of gain compared to low-sens mode. With most of the microphones we use, the following tend to be true:<ul><li>The low-sens mode is appropriate for most recording conditions with sounds ranging in level from conversation to loud music</li><li>The high-sens mode is appropriate for recording quiet sounds or as suggested in the example above [broadcast-ready recording]</li></ul> There is considerable overlap between the two ranges if you're using manual record levels, but with practice it becomes reasonably easy to recognise which is better for a given situation with your equipment.</blockquote> <b>Why should I use manual levels instead of AGC?</b> <blockquote><i>When recording with a microphone:</i><blockquote>First off: this is as much a matter of personal opinion as it is something that depends on exactly the conditions under which you're recording. <b>The biggest reason I can give is that judicious use of manual levels will <i>always</i> give a more natural-sounding result.</b> This is a more purist [dare I use that word] approach to recording since the first generation is thus completely unprocessed [which is always a good thing, in my opinion]. <blockquote><i>Side-note:<a href="http://www.digido.com/portal/pmodule_id=11/pmdmode=fullscreen/pageadder_page_id=59">How To Make Better Recordings in the 21st Century--An Integrated Approach to Metering, Monitoring, and Leveling Practices</a>, from the <a href="http://www.digido.com">Digital Domain website.</a></i></blockquote> The next biggest reason is that we're recording with digital equipment, here. 
<b>Given that the 16-bit quantisation gives a usable dynamic range of 96dB, and that most of the microphones that HiMD recordists tend to use have self-noise which is louder than the mic preamps in their recorders [the total usable dynamic range with HiMD is above 85dB], even if you crank your recording levels <i>down</i> you're still going to end up with a recording that has a low noise floor and a natural dynamic range.</b> A prime example here: professional digital recording equipment's VU meters follow one of several reference standards for measurement. My preferred measurement standard is that established by the European Broadcasting Union [EBU – which I've seen used by most film recordists, who follow the most rigid standards in the world of audio], which sets 0VU at around -20dBfs. Yes, you read that right .. <b> -20dBfs</b>. Going by the good old “standard” methods of record-metering, that means that you ideally want your average level while recording to be at 0VU, and in the case of EBU metering, that gives you a whopping 20dB of headroom. Even the record meters on MD and HiMD are relatively conservative in this regard: that centre hash [consider it the 0VU mark] on the meter is -12dBfs. Directly from the RH10 operations manual: [attachmentid=1641] What does this really mean to recordists? It means – if you set levels manually according to your recorder's meter, with an average around the 0VU mark, you'll have at least 12dB of headroom. That means 12dB more volume above what you've measured for <b>without distortion</b> [taking into account the limits of both your mic and preamp, that is]. I, personally, try to average a bar or two below the HiMD 0VU mark, giving about 20dB of headroom. Of course, when you get this home, upload your recording, and play it back – it's going to be pretty quiet compared to your favourite recent pop recordings off CD. Scroll back up and read the article at Digital Domain. You'll see why – it's because your monitoring [i.e. 
your amp+speaker setup] is set to listen to “average” listening material at “average” volume, which [in my opinion, though it does depend on what the content is] doesn't reflect good recording [or mixing, or mastering] practices in any way, shape, or form. So – to try and sum this up, a suggestion to recordists: <b>you have a digital recording medium, with a decent mic preamp and a huge dynamic range – <i>****USE IT!****</i> Save the bit-pushing for the mastering stage, if you must do it at all. After all – GIGO – Garbage In, Garbage Out. If your recording starts off compressed or distorted, then it will end up compressed or distorted. </b> <b>The lesson is – USE YOUR RECORD METERS. If you have a non-backlit LCD display, carry a LED flashlight. Or, if you have to pocket your recorder before heading into a concert because you're “stealthing” </b>[I'll leave the lecture on honouring IP/copyright to someone else this time], <b>set your levels conservatively</b> [noting that knowing what level is “right” for your equipment is something you have to learn by experience], <b>and learn to use the tools you have in post-production</b> [such as non-linear audio editing software with settings or plugins for gain and dynamics processing]. <b>Likewise, there are cases when you <i>know</i> very well that what you're recording is simply not going to get much or even any louder than what you're metering to begin with, so there's less need to leave massive headroom.</b> Use your own best judgement, and remember another old saying – <i>better safe than sorry.</i> If you go for the cleanest possible recording to begin with, you'll have a first generation that is highly usable in the editing and mastering stages, that you can process any way you like – including adding compression, EQing that bass that was too loud because of your cheap microphones, and what have you. As a last note – using manual levels is not perfect for every situation. 
There will be times when you'll need way more headroom than expected because certain sounds will be very loud [resulting in clipping distortion if you don't or can't address the problem]; there are other times when your average level is quieter than expected, and you'll find the [albeit usually rather low] noise floor audible after editing. The solution here is to experiment and figure out for yourself when it's most appropriate to use one method or the other – not to mention how, with practice, you'll learn where to set levels with your equipment setup simply “by ear.”</blockquote> <i>When recording with the line-in:</i><blockquote>The same basic principles apply with line-in recording as for mic recording, but there are a few reasons why sometimes it's appropriate, not to mention more convenient, to NOT use manual level control. A perfect example of this: while at a show and making a recording directly “off the board,” I almost <i>never</i> use manual levels. Most mixing desks allow you to plug in your recorder either through a separate “tape out” feed or by using one stereo bus just for recording. Either way usually allows you to set the output level before it hits the recorder – meaning you can set your recorder running without ever touching its level controls, and then set the output level of the board to read with a good average – while still leaving plenty of headroom – according to your recorder's meters. This has a pretty serious advantage to it: if you leave plenty of headroom, you'll still capture the dynamic range of the performance, but – you'll also have the AGC there to catch things should levels suddenly jump too high. This is the best of both worlds, really: you've set your levels manually [using the controls on the sound board], but you still have the safety-net of the AGC there to prevent outright clipping distortion should things suddenly jump in level. 
Ideally, Sony would enable this as an option on their recorder – after all, the hardware is already in there to do it – that is, to use manual levels but still have the bonus of overload protection. A menu item for “manual levels limiter” or something sure would be nice, especially considering the fact that this is the main reason so many inexperienced recordists try manual levels and then decide to do without – because as it is now, it's totally unforgiving [i.e. it results in unrepairable clipping distortion] if you don't leave sufficient headroom.</blockquote></blockquote> <b>What level should I use when recording with manual levels?</b><blockquote><b>There is no straightforward answer to this question.</b> Every microphone [model] has a different sensitivity, every venue or location has its own acoustical characteristics, and every amplification system has its own volume. We can't guess as to how loud the PA will be at your local hole-in-the-wall punk venue, nor how loud an acoustic act will be from the fourth-row centre table at your local jazz club. Not to repeat myself, but <b>USE YOUR RECORD METERS. If you have a non-backlit LCD display, carry a LED flashlight. Or, if you have to pocket your recorder before heading into a concert because you're “stealthing,” set your levels conservatively and learn to use the tools you have in post-production.</b> When you drive, you don't just steer blindly; you [hopefully] look where you're going. You also look at instruments like your speedometer, and check your mirrors to see what other traffic is doing. To think that doing the best possible job of recording should be fire-and-forget is, I would say, unreasonable. 
<b>It takes some effort to do a good job, but that doesn't mean it has to be difficult.</b></blockquote> <b>What is a battery module for?</b><blockquote>Battery modules are for powering <a href="http://en.wikipedia.org/wiki/Microphone#Capacitor_or_condenser_microphones">condenser</a> and, more specifically in most of our cases here, <a href="http://en.wikipedia.org/wiki/Microphone#Electret_capacitor_microphones">electret condenser</a> microphones. Most other types of microphone do not require power in order to function, and some contain their own power supplies [such as the single AA battery used by the Sony ECM MS-907, or the 9V battery optionally used by the Rode NT4 when you can't supply it with 48V phantom]. Different microphones have different bias voltage requirements. Stage and studio microphones tend to use 48V <a href="http://en.wikipedia.org/wiki/Phantom_power">phantom power</a>; location-recording equipment used by film sound recordists and broadcasters often uses 12V phantom power. These are the two most common formats used by professional [balanced] equipment. Smaller, portable microphones as usually used with MD and HiMD recorders work in the same fashion, but due to the requirement of portability, are made to work optimally with lower bias voltages. Most of the elements we use are made for an optimal bias of around 10V, ideal for use with a 9V battery. The 1-2.5V supplied from most HiMD recorders is sufficient to make them work, but isn't quite enough to expect full performance from the capsule. What is full performance, then? The result of underbiasing a condenser mic is that the maximum SPL [loudest sound] it can transduce without distortion is reduced. A microphone that claims to have a max. SPL of 120dB [at its rated 10V bias] can fall to 105dB when biased at only 1.5V, for example. In short, what a battery box does is make up this difference. 
In the case of the above microphone, its usable maximum SPL is increased by around 15dB simply by supplying the higher bias voltage. For those who are recording loud rock concerts, this means the difference between registering a clear, clean recording, and having a disc's worth of unlistenable, distorted garbage. <b>Battery boxes </b>[battboxes] <b>do not address the problem of MD and HiMD recorders' limited mic preamp headroom which leads to [preamp] clipping distortion when faced with high-level signals directly from a microphone.</b> In these cases, either attenuation is required between the mic and the preamp [not using a battbox], or between the battbox and the preamp [using a battbox], or you have to go directly into the line-in from the battbox [assuming the level from the mic is high enough above the noisefloor]. See Reactive's thoughts on the topic of attenuators here at <a href="http://forums.minidisc.org/index.php?showtopic=9069">this thread</a>. Also see <a href="http://forums.minidisc.org/index.php?showtopic=11254">Greenmachine's DIY instructions for a simple battbox</a> [scroll down past the DIY microphone post].</blockquote> <b>What's the practical difference between a battery module and an attenuator?</b><blockquote>First, realise that what an <a href="http://en.wikipedia.org/wiki/Attenuator">attenuator</a> does is <i>throw signal away</i>; it causes deliberate signal loss. Generally speaking, this is something engineers and purists will frown on doing unless it's absolutely necessary. The most often seen reason for using one is in the case of trying to capture a very loud rock concert with a microphone whose maximum SPL is not being exceeded, but whose sensitivity is high enough that its output level overloads the preamp it's being plugged into. In this case, the attenuator reduces the output level of the microphone [it throws away signal] sufficiently that it no longer overloads the preamp, making a clean recording [of the loud parts] possible. 
I'm going to repeat myself somewhat here for those who skipped the last section: battery boxes [battboxes] do not address the problem of MD and HiMD recorders' limited mic preamp headroom which leads to [preamp] clipping distortion when faced with high-level signals directly from a microphone. In these cases, either attenuation is required between the mic and the preamp [not using a battbox], or between the battbox and the preamp [using a battbox], or you have to go directly into the line-in from the battbox [assuming the level from the mic is high enough above the noisefloor]. [see links at the end of the above section] Many users of this board use variable attenuators when recording loud concerts and report good results. I, myself, have never tried it and have also never had reason to, as the loudest sounds I've recorded [such as jets taxiing, artillery fire, helicopters passing directly overhead – all of those at an air show, I'll point out.. fireworks, and thunder from close-striking lightning] exceeded the max SPL of my microphones [powered by my recorder, not a battbox] in addition to clipping at the preamp. In other words, I cannot speak from experience on this matter. I would like to point out one particular concern of mine, however: Putting an attenuator in line with your microphone causes a loss in the signal from the mic, and should also cause a corresponding loss in the bias voltage being supplied by the recorder. Since recording high SPLs cleanly is the entire point behind using the attenuator, this might, with some equipment, end up being an exercise in self-defeat; you're lowering the output of the mics so it doesn't overload the preamp, but you're also lowering the maximum SPL the mics can transduce without distortion. Whether this happens or not depends on exactly the mic, its sensitivity at the supplied bias voltage, &c. - and, judging by the experience of forum members such as A440, it doesn't happen with the most commonly-seen equipment. 
Point being: it's unlikely to happen judging by the experience of our forum's users, but it's not impossible, which is why I put it out there.</blockquote> <b>What's an external preamplifier for?</b><blockquote>An external preamp allows the recordist to completely avoid the pitfalls of microphone underbias, a possibly-noisy built-in preamp in their recorder, and the possibly-limited headroom of the same built-in preamp. Ideally, this is the best of all worlds. With the better models available [which feature controllable variable-gain inputs], you plug the external preamp's output into the line-in of your recorder, set it to unity gain [18/30 on HiMD recorders] or even just leave the unit in AGC mode, plug the mic into the preamp, and set the gain of the preamp [with a nice tactile turn-it-by-hand knob] so that the recorder's meters are happy. The advantages of using an external preamp with a variable-gain input can be summed up thus:<ul><li>The probable lower noisefloor of the external preamp makes for cleaner recordings of quiet sounds</li><li>The higher bias voltage it supplies to your microphone ensures that high SPLs can be recorded cleanly</li><li>The higher preamp headroom also ensures that loud sounds can be recorded cleanly without having to incur signal loss before the signal is even preamplified, let alone recorded [i.e. it avoids having to use an attenuator]</li></ul> The chief disadvantage to using an external preamp is that it means having another box to both carry and power, as well as more cables snaking about your body. All preamps are <i>not</i> created equal, and some designs actually just make things harder and worse to control [read: any model that uses fixed-gain, DIP switches to set the gain, and lacks any kind of feedback regarding its output levels] rather than either easier or better. 
This is another matter of personal preference as well, depending on what kind of recording you do – a fixed-gain preamp might be perfect for certain situations, while being generally less versatile. I would personally recommend using an external preamp for things such as nature, ambient, environmental, or acoustic music recording – generally all things where detail and dynamic range are of importance from the get-go. The potential lower noisefloor, probable higher gain and headroom of the preamp will make the investment worth it; also, since many of these types of recordings are made rather deliberately, the inconvenience of the extra equipment and cabling is less likely to be bothersome.</blockquote> <hr noshade size=1> This document was written, formatted, and licensed under Creative Commons (CC) by dex Otaku [Derek Gunnlaugson] for MDCF on 2006-05-11. The contents of this post, though not the entire thread containing it, are the intellectual property of dex Otaku. Redistribution or quotation is allowed under the rules of Creative Commons: attribution is mandatory. Commercial re-use of any kind is prohibited without the express written consent of the author. Someone might want to sticky this, after reviewing it for errors.
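To make the metering arithmetic discussed above concrete, here is a minimal Python sketch (my illustration, not part of the original post) of the dBFS scale, the ~96dB theoretical range of 16-bit quantisation, and the headroom left by averaging at the MD/HiMD centre hash [-12dBfs] versus EBU-style 0VU [-20dBfs]:

```python
import math

def dbfs(amplitude, full_scale=32767):
    """Convert a linear 16-bit sample amplitude to dBFS (0 dBFS = digital full scale)."""
    return 20 * math.log10(abs(amplitude) / full_scale)

# Theoretical dynamic range of 16-bit quantisation: 20*log10(2^16) ~= 96.3 dB
dynamic_range = 20 * math.log10(2 ** 16)

# Headroom is the distance between your average metered level and 0 dBFS.
# Averaging at the MD/HiMD centre hash (-12 dBFS) leaves 12 dB of headroom;
# averaging at an EBU-style 0VU (-20 dBFS) leaves 20 dB.
headroom_md = 0 - (-12)    # 12 dB before clipping
headroom_ebu = 0 - (-20)   # 20 dB before clipping
```

Quartering a full-scale amplitude, for instance, lands right at the -12dBfs centre hash, which is why "a bar or two below" it corresponds to roughly 20dB of headroom.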
  5. If I'm not totally mistaken, both use the same codec, so it should make no difference. Please correct me if I'm wrong.
  6. They used to call what you're talking about PortaStudios. In fact, they still do.
  7. This isn't news. This was posted about here quite a while ago.
  8. Okay. What is the actual source you're ripping? Is it an actual CD, is it from a disc image, or is it pre-split files of any given format [which SS will convert] that already play properly aligned? The most usual case [going by other users' reports here] is that SB will rip properly, but SS moves markers on its own under certain circumstances. [I've seen it happen on my own machine, but I can't remember what the conditions were at the moment.]
  9. There's basically a 99% chance this is because of your CD drive's read offset. See here. You can try turning on the "read with smoothing" option in SS's settings to see if it makes a difference. I doubt it will. If your drive has real problems because of its offset, then try using ripping software that does offset-correction such as Exact Audio Copy [EAC]. You have to use one of the CDs in its database to measure the offset first. If you don't have one of those CDs [a real one, not a burnt copy] then the results won't be accurate. You could also try using a different optical drive.
  10. Post a new thread in the software support forum, and please read and follow the forum guidelines for posting beforehand.
  11. I haven't tested this, but my assumption [which could be wrong] is that the conversion tool will behave the same as SS itself does; if you try to convert a track to the same as its original bitrate, with the "add copy protection" option turned off, it simply makes a new copy of the track, sans-DRM. You'll be able to tell if it's actually transcoding, too: SS "converting" using the above method flies through the tracks you've told it to convert. If it's transcoding, it will take [obvious] time to do so.
  12. So far as I knew, the encoder that SS uses by default is by FhG [the patent holders for MP3]. SS doesn't muck with it. Last I knew it is widely regarded as the best CBR encoder [yes, better than lame; lame is widely regarded as the best VBR encoder]. In any case, chances are that whatever you install on your system as a directshow/ACM encoder for MP3 will be what's used. For CBR encoding, the differences will be very minimal [if you can hear them with high bitrate encoding, I'd be very surprised] between FhG, lame, mp3lib, libmad, [both included/used by FFDshow] et al. Basically: the default has nothing wrong with it. The only time I'd worry about what the encoder is doing to your music is if you're using any form of the Xing codec [which, oddly, is what comes with all of Sony's professional software such as Sound Forge, Vegas, and Acid].
  13. This is normal behaviour for Sony's digital amps. If you're using low-efficiency [i.e. high power for low volume] output headphones, the amp will "ride" the signal to prevent outright clipping. In a normal analogue amp, the result would be soft-clipping. What's happening is exactly as you described: compression/limiting [same as how the AGC works]. Using any additive EQ worsens this problem, since a given band that had energy going up to or near 0dBfs -before- additively EQing it would already be approaching clipping before even getting to the output amp [almost all currently-released CDs are mastered with the average levels so high that this occurs through all loudish sections regardless]. Adding to that band that's already near clipping pushes it straight into clipping. The Sony EQ seems to pre-compensate for this by limiting the overall volume; this avoids outright distortion but gives your music that "pumping" or "riding the slider" kind of sound. I've noticed that the RH10 does this as part of the digital amp and as part of the EQ, separately. My NH700 does it with the EQ, but the [analogue] amp doesn't. What I've found with the digital amp is basically that if you have to use low-efficiency 'phones with the volume cranked, the amp and the EQ will -both- be compressing/limiting the entire signal. Basically, if you wish to avoid this problem, do the following:
* Buy more efficient 'phones
* Simply turn it down; if you're listening at 30/30 or near it with the stock 'phones, you're likely damaging your hearing every time you listen
* Never use the EQ above the centre 0-line
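The arithmetic behind "additive EQ pushes a near-full-scale band into clipping" can be sketched in a few lines of Python (an illustration of the principle only; it is not how Sony's firmware is actually implemented):

```python
def apply_gain(sample, gain_db, full_scale=1.0):
    """Boost a normalised sample by gain_db, hard-clipping at full scale,
    which is what a plain output stage with no limiter would do."""
    boosted = sample * 10 ** (gain_db / 20.0)
    return max(-full_scale, min(full_scale, boosted))

# A loud passage on a hot-mastered CD may already peak near 0 dBFS...
peak = 0.95
# ...so a +6 dB EQ boost asks for roughly 1.9x full scale: the result
# hard-clips at 1.0 instead.
clipped = apply_gain(peak, 6.0)

# A quiet sample, by contrast, takes the same boost cleanly.
quiet = apply_gain(0.1, 6.0)
```

The apparent workaround described above is to duck the overall level first [a limiter], which trades the clipping distortion for the audible "pumping" effect.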
  14. Does the SS conversion tool work on encrypted/DRM'd atrac3plus files that are already in the library? I thought it was for converting MP3s and such only.
  15. I can't speak to exactly what SS does, but what I expect from a cleanup function of basically any database is that it removes duplicate and dead entries. Any track info that is in the database without a corresponding file on disc should be removed. My observations thus far are that this does work. Subsequently re-adding the same files to the library means re-reading tags from the files themselves, or in the case of WAV files, having to re-enter them manually. SS has a ways to come yet on file handling in general, but I find that most of the database's functionality is okay for straightforward uses like keeping a library of only a3/a3+ tracks. There are definite problems with duplicate handling, and there needs to be an easier way of removing dead entries, among other things. I was startled while using TreeSize a while back to see that my SS Db folder was 600MB in size .. from successively importing, transcoding, and removing MP3s and WAV files, the Db had bloated to 350+MB .. and SS keeps 3 copies [current and two previous versions] on your drive at all times. Once I ran the Db cleanup tool, it shrank down to around 1MB.
  16. Scroll back up and read my first reply in this thread. The key word there was "effectively," as in - even though there is DRM, it's very easily circumvented and/or gotten rid of altogether. Ergo, there is effectively no DRM.
  17. I know this likely won't make sense, but the reality is: SS still applies DRM to all tracks you upload from any recorder. You can remove the DRM easily post-upload, but DRM is still on the tracks during and after upload. Check-in/check-out has not been in effect for quite some time. There is effectively no DRM on uploaded tracks, in any case. The DRM applies only to the original uploaded copies within SS itself.
  18. Multiple answers to this question, with a simple preface: if you upgrade by installing 3.4 over 3.2, the files will be accessible as-is. As with all recommendations about upgrading, I'd still suggest using the SS backup tool BEFORE doing so, just in case. Answer #1 - transcode the files in SS to the same as their original bitrate, with the "copy protection" box UNchecked. This creates duplicates of the tracks as they are [in the "optimized files" folder] but without DRM. Answer #2 - export the tracks as WAV, back them up elsewhere in an open, non-DRM'd format, and re-import the tracks when you need them. In either case, there's no need to do this before upgrading as 3.4 will give you the same options you currently have.
  19. I wouldn't hold my breath if I were you.
  20. No one's going to make any comments about his ETA being 4/20? Heh.
  21. The issue is not one of what processor is in the computer. It's about: 1 - the hardware [the HiMD recorder]; the Mac-compatible models [other than the RH1] are basically just the RH10 and RH910 with revised firmware [probably something as simple as a different USB device ID], and more importantly 2 - the software on the computer, which is made to interface with the recorders using Sony's proprietary DRM'd protocol. Technically speaking, all HiMD models could be made Mac-compatible simply by writing the needed software for OS X [firmware updates should not even be necessary]. Sony have decided, however, to limit which models do what, and with which software. This makes them more money in the end: even though the hardware is technically identical [it's the software and firmware that differ], they can charge more for it. Basically, the Mac software distinguishes between what it will and won't allow uploading from - completely unnecessarily, I might add. It's a straight-up money grab in the case of last year's Mac-compatible models, which the RH1 changes.
  22. Cool. In my eyes at least, it would improve the otherwise already excellent quality of your reviews. Oh, and I'll point out: while I did bring this up in your thread, this is not intended as a personal attack of any kind. I have seen the same kind of response plots made from music by several users on this board as well as others.
  23. With a linear sweep [easily produced with free software, and findable anywhere online - I actually carry test discs with things like this on me at nearly all times] you -should- be able to hear a 9dB drop between 1-10kHz quite plainly [unless you've been working in a machine shop all your life, perhaps]. "Real world" doesn't apply to graphing in a case such as this [where the only thing being indicated is linearity of frequency response]. Graphing like this, from codec to codec, is almost entirely meaningless in terms of demonstrating artefacts, distortion, &c. because it only shows you the one thing you're measuring. Since the measurement in this case is the only thing of importance [showing a basic frequency response plot], a sweep would be far more meaningful than an aggregate graph of music. The music shows the same plot, true, but a sweep shows it plain as day - it's either flat or it's not; it's either full-bandwidth or it's not. You're not trying to measure anything else, so why pretend there's some difference between a sweep and music, other than the fact that music is harder to read and provides far less useful information? I don't dispute the otherwise excellence of the review or your abilities. I just opine that following a well-established convention - one in use probably since about the time analogue tape was invented, perfectly clear in its presentation [using music is not], and requiring no additional effort beyond playing a test sweep instead of music - makes much more sense. Quite simply: a sweep plot is easy to read and shows everything that needs to be shown in a case such as this [where it's the appropriate thing to use]. It's the right tool for the job, and it's far more informative about what you're trying to find out.
Correction: With a linear sweep, comparing directly [not even ABXing, because it's so obvious] between a source that is flat and a source that drops 9dB between 1-10kHz, the difference should be plain as day to most people's ears.
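For anyone who wants to generate such a sweep themselves, here's a minimal sketch in Python using only the standard library. The 20 Hz-20 kHz range, 10-second length, and half-scale amplitude are arbitrary example choices, not a standard:

```python
import math
import struct
import wave

def write_linear_sweep(path, f0=20.0, f1=20000.0, seconds=10.0,
                       rate=44100, amplitude=0.5):
    """Write a mono 16-bit WAV containing a sine sweep whose frequency
    rises linearly from f0 to f1 Hz over the given duration."""
    n = int(seconds * rate)
    frames = bytearray()
    for i in range(n):
        t = i / rate
        # Instantaneous phase of a linearly swept sine:
        # 2*pi * integral of f(t) dt = 2*pi * (f0*t + (f1 - f0)*t^2 / (2*T))
        phase = 2.0 * math.pi * (f0 * t + (f1 - f0) * t * t / (2.0 * seconds))
        sample = int(amplitude * 32767 * math.sin(phase))
        frames += struct.pack('<h', sample)
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit
        w.setframerate(rate)
        w.writeframes(bytes(frames))

write_linear_sweep('sweep.wav')
```

Play the resulting file through the device under test, plot the output level against time, and any deviation from flat response reads directly off the graph - which is the whole argument above.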
  24. Great review. I still wonder though, why, when people do frequency response tests, they do not use sweeptones [which would show the response in a linear fashion which is very easy to simply glance at and instantly understand for what it is].