Wednesday, December 15, 2010

DJs' darkest hour: Indestructible eventually meant irrelevant for Technics turntable


TOKYO —
The Technics SL-1200 turntable is to dance music what the Fender Stratocaster is to rock: the key to a musical and cultural explosion. There’s even a pair of the iconic decks on display at the Science Museum in London, where they’re hailed as an invention that shaped the world.

It’s weird to think that hip-hop, house and techno wouldn’t exist if it weren’t for some Japanese engineers on a quest to build a hi-fi for the audiophile market. But the uncluttered Technics SL-1200, with its high-torque electromagnetic motor, pitch adjust, precision tone arm and ability to take whatever abuse was thrown at it, was form and function in harmony. From its 1972 debut on, the SL-1200—particularly in its MK2 incarnation—became the bedrock of sound systems at the world’s best clubs. It served as a platform for people like Grandmaster Flash to discover how to scratch records, and folks like Larry Levan to beat-match them.

When CDJs established themselves in clubland in the mid-’90s, rumors began to circulate about the imminent demise of Technics turntables. It would end up taking a lot longer than that, but after 38 years and 3.5 million units sold, the day that vinyl DJs and turntablists always feared has finally come: the SL-1200 range is going the way of the dodo.

In a sad irony, the sheer durability of the decks means that they’ve outlived the companies that manufactured their now-obsolete parts. Manufacturer Panasonic cited the increasing difficulty of sourcing analog components—the same ones it had been using since the beginning, in order not to compromise on quality—and a 90% drop in sales over the last decade as reasons for discontinuing its iconic product.

If the Technics SL-1200s and their accompanying 10kg record bags are the Sony Walkman and cassette box, then the laptop running Traktor DJ software is the iPod. The latter is smaller, lighter, more convenient… but does it really sound better? And more importantly, has it got soul?

That’s a question which will soon have to be answered. Jeff Mills will be deftly mixing three decks at once for a few years yet, but the next generation of DJs won’t be starting out on turntables. In the hyper-competitive world of DMC scratch contests, the DJ Kentaro-style champion of the future will be cutting up a selection of digital WAV files on a virtual “scratch pad.”

Ultimately, Technics turntables were a victim of two things: the digital revolution and their own success. They’re so good that if you buy a pair, you’ll never need to replace them—and therein lies the problem.

Thursday, December 2, 2010

The ‘sound sculptor’ that could win the Turner Prize


A famous sculptor is filling my living room with her priceless art. Only she’s not actually here. She’s on speakerphone from her home in Berlin, singing a dreamy 16th-century Scottish folk song in a fey, untrained voice. It’s the same voice, same song, that is commanding gallery space at Tate Britain and has become the toast of the international art world. Ah, the strange wonders of Skype and Susan Philipsz.

The 44-year-old Scottish “sound sculptor” is currently the odds-on favourite to win the 2010 Turner Prize, to be announced next week.

She was also the show’s most controversial nominee, given that her work – which consists of songs sung plainly by the artist, recorded and looped and played in carefully selected locations, from underpasses in Ljubljana, Slovenia, to the rotunda of the Guggenheim Museum in New York – is completely intangible.


But don’t mistake her for a musician, or even a “sound artist” in the tradition of John Cage. Philipsz is more interested in perceptions of architecture and space – both internal and external – than in key changes or composition. Her work is less musical than it is performative and experiential.

In that sense, it is not dissimilar to the works of Janet Cardiff and George Bures Miller, Canadian artists whose recent works include The Cabinet of Curiousness – which people can “play” by opening its drawers, releasing different sounds. Philipsz also calls to mind the ambient-noise projects of American artist Bill Fontana, who once outfitted Big Ben’s tower with microphones and sensors.
Philipsz’s work, however, is much simpler. “I’m obviously not a trained singer; and I don’t do anything to make my songs sound any better, as you normally would in a studio,” she says.

Sitting in a cavernous white space at the Tate this week, listening to the artist sing an old sailor’s lament, I was inexplicably moved. Experiencing her work is a bit like being on the street when someone drives by blaring a song you once loved – banal and yet magically familiar.

One of her earlier installations took place at a big-box outlet, Tesco, in Manchester. Philipsz sang songs live over the loudspeaker normally used to announce specials on frozen TV dinners. As shoppers shopped, she watched from a glassed-in upstairs office and, once in a while, pressed the intercom button and sang a rendition of Radiohead’s Airbag or the Rolling Stones’ As Tears Go By.

“Suddenly there would be this drop in the ambient noise – which is much louder than you think in a grocery store – and people would stop what they were doing and look around, bewildered,” she remembers. “Some would giggle and talk; others were silent and seemed quite moved.”

The Tate exhibit is a distillation of Philipsz’s work performed under a famous bridge in Glasgow (not far from her hometown of Dundee). She is currently exhibiting at locations throughout London’s business district, including the Tower of London and the banks of the River Thames, and in a public garden in Lisbon. She has also been commissioned to do an outdoor sound piece this winter for Colorado’s Aspen Art Museum, which will feature one of her songs played on a mountain ridge over a valley (“They’re going to give me skiing lessons!”), and another at the Museum of Contemporary Art in Chicago.

It’s an impressive schedule for an artist who has spent most of her professional life believing her work would never find a wider market. “I spent my whole life thinking I’ll always be poor, I’ll never sell a thing, and suddenly everything changed,” she says.

So what explains the widespread appeal of this ephemeral new art form? The answer might just be hardwired into our brains. As McGill University professor Daniel J. Levitin, author of This Is Your Brain on Music and the just-published The World in Six Songs, argues in his new book, “music is not simply a distraction or a pastime but a core element of our identity as a species, an activity that paved the way for far more complex behaviours.”

Perhaps in Philipsz’s ethereal a cappella melodies we recognize strains of our ancestors singing what Levitin defines as “knowledge songs” – the songs we use to teach our children, or outsiders, how our culture works. If you think this practice sounds outdated, just remember how you learned the alphabet.
Levitin also writes about how music changes the way we behave – witness the now-common police practice of playing classical music in seedy locales to drive away the riff-raff. Philipsz, whose work has regularly been exhibited under bridges where the homeless sleep and junkies use, hopes her work makes these environments more – rather than less – welcoming for denizens of the street.

She remembers one of the most memorable moments of the exhibit under the bridge over the River Clyde in Glasgow. “An old man holding a bottle came up to me and said, ‘Can you hear those voices, or is it just me?’ ” A few feet away, one of his mates was crying.

The mind on music

(Original Link - http://www.chicagotribune.com/health/sc-health-1201-music-20101201,0,3647950.story)


On her last night at the hospital after undergoing a series of spine surgeries, Susan Mandel lay in bed listening to Pachelbel's Canon in D.

For days, Mandel's positive attitude had kept any anxiety at bay, so she was surprised when she noticed her face was wet, and then her pillow, which slowly soaked through. She sobbed silently, listening to the familiar violins, until the tears stopped coming. Then she felt peace.
"It wasn't a cry of anguish, it was a cry of relief," Mandel said, recalling the night more than 20 years ago. "It's very tender, evocative music, and I think it gave me permission to release the pent-up emotions."

Philosophers for millenniums have marveled at the power of music to speak to our souls, to inspire joy, melancholy, aggression or calm with visceral insight beyond the grasp of our rational minds. Thanks to advances in neuroscience, researchers are beginning to understand what it is about music that touches us so deeply, and how to harness that power to soothe, uplift, comfort and heal — to use music as medicine for emotional and physical health.

Mandel, a music therapist and research consultant at Lake Health Wellness Institute in Cleveland, this month released "Manage Your Stress and Pain Through Music," (Berklee Press Publications, $29.99), with co-author Suzanne Hanser, chairwoman of the music therapy department at Berklee College of Music in Boston. The book explains how to choose and use music to cope with challenges in your life.

Not what you'd guess

It can seem obvious which songs would bring you up and which might bring you down. And indeed, there are structural components to songs that are meant to communicate joy, such as a fast tempo in major mode, or sadness, such as a slower tempo in minor mode. But there's a difference between the emotion communicated through music and the emotion actually induced in the listener. Our memories, personal preferences and mood at the time can have a heavier influence than the intent of the musical structure in how music makes us feel.

"You could have a really positive emotional experience with a song that structurally communicates sadness," said Meagan Curtis, assistant professor of psychology at State University of New York at Purchase, who does research in music psychology.

What matters most in reaping the health benefits of music, from pain reduction to stress relief, is that you listen to music you enjoy, research shows. In a study on cardiac rehabilitation patients, Mandel found that the patients who liked a therapeutic music CD she put together experienced a reduction in blood pressure and reported feeling calmer, while patients who didn't like the music actually felt worse.

While there are structural components that convey soothing, such as consonant harmonies and a narrow pitch range, whatever music has the most positive associations to the individual will have the most positive emotional and physiological response. It activates the parasympathetic nervous system, which calms heart rate, lowers blood pressure and relaxes muscles.

"I have found people who love punk rock and find that it helps them to sleep," Hanser said. "It's likely that they have learned it truly speaks to them and expresses a part of who they are."

Music and pain

Music also has been found to help people tolerate pain longer and perceive it as less intense.
Studies using a cold pressor task, which simulates chronic pain by submerging subjects' hands in a bucket of freezing cold water, found that people were able to leave their hands in the water longer when they were listening to music they enjoyed, Curtis said.

That could be because people take comfort in the familiar, or because it distracts them. Between recalling memories, tapping our fingers, conjuring up images and other tasks, our brain releases so many chemicals to process music that they interfere with our perception of pain.

How the brain processes

There's some evidence that we feel music viscerally because it goes straight to the amygdala, the part of the limbic system that manages our emotions, and the hippocampus, where long-term memories are stored, Hanser said.

Music that gives people chills or shivers up the spine has been found to activate the same reward areas of the brain stimulated by food, sex and certain types of recreational drugs, Curtis said. While different people get chills from different songs, often those shiver-producing songs have an unexpected tonal structure, like a chord that isn't part of the harmonic progression, she said.

Impact of lyrics

While structure is less important than personal experience in a song's ability to induce emotion, lyrics may be even less important than structure, Curtis said. We don't need to consciously attend to structure to process its emotion, but we do have to pay attention to lyrics, making the emotional impact of structure stronger and more immediate.

People are usually very intuitive about what songs are useful to them and often choose music appropriate for the state they're in, Curtis said. That explains one of the great ironies of human behavior: that many people like to listen to sad music when they're sad.

We might like the affirmation, as we create a bond with the singer or composer because they, too, have felt what we feel, Curtis said. Another theory is that wallowing is a kind of emotional catharsis, helping us fully experience the sadness so that we go through the stages of grief more quickly.
And it can be a healthy thing. A central tenet of music therapy is to meet people where they are, known as the iso principle. So if people are very depressed and lonely, you would start them with music that matches their mood before introducing something more uplifting.

"You first affirm and allow the person to reflect, and then move on to more positive things and hopeful outlooks," Hanser said.

Some researchers hope to nail down the precise combination of pitch, tone, tempo, rhythm, timbre, melody and lyrics that makes a piece of music ideal for regulating people's moods or helping to reduce pain. A study under way at Glasgow Caledonian University aims to develop a "comprehensive mathematic model" that identifies how music communicates emotions, which eventually could help doctors prescribe music.

Hanser is skeptical that a sweeping formula exists, and if it does, "I hope we don't find it," she said. "I don't know anyone who is the mean, the normal. If we can recognize our own unique characteristics and what makes us each respond so differently, that I think is really fascinating and what humanity is all about."

aelejalderuiz@tribune.com

Emotional impact
While a person's emotional reaction to a song is based largely on his or her history with the song, the song's structure also can communicate emotions, mostly through mode (major or minor chords) and tempo, said Meagan Curtis, assistant professor of psychology at State University of New York at Purchase.

A fast tempo (around 120 beats per minute or faster) tends to heighten physiological arousal, while a slower tempo (around 60 beats per minute) tends to reduce arousal. Major chords tend to evoke positive emotions, such as joy and contentment, and minor chords negative emotions, like fear, anger or sadness.
Curtis offered some examples:

•Major mode, fast tempo. Example: "Shiny Happy People," by R.E.M. Emotion conveyed: happy.

•Major mode, slow tempo. Example: "(Sittin' On) The Dock of the Bay," by Otis Redding. Emotion conveyed: soothing, tenderness.

•Minor mode, fast tempo. Example: "Smells Like Teen Spirit," by Nirvana. Emotion conveyed: angst, anger.

•Minor mode, slow tempo. Example: "Eleanor Rigby," by the Beatles. Emotion conveyed: sadness.
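Curtis's four quadrants can be sketched as a toy lookup. The function name and the exact tempo cutoff are illustrative choices of mine; the labels and the 120/60 bpm figures come from the article.

```python
# Toy classifier for the emotion a song's structure tends to convey,
# based on the mode/tempo quadrants described above. The 120 bpm
# threshold is an illustrative cutoff, not a claim from the research.

def conveyed_emotion(mode: str, bpm: float) -> str:
    """mode is 'major' or 'minor'; bpm is the tempo in beats per minute."""
    fast = bpm >= 120  # faster tempos heighten physiological arousal
    if mode == "major":
        return "happy" if fast else "soothing, tenderness"
    if mode == "minor":
        return "angst, anger" if fast else "sadness"
    raise ValueError("mode must be 'major' or 'minor'")

# The article's examples, with guessed tempos:
print(conveyed_emotion("major", 140))  # "Shiny Happy People" -> happy
print(conveyed_emotion("minor", 70))   # "Eleanor Rigby" -> sadness
```

Of course, as the article stresses, this only describes the emotion a song's structure *communicates*; the emotion it *induces* depends on the listener's memories and preferences.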

Road Trip Through Florida....Anyone wanna party!?


What's up everyone. I am making my first trip back to the States in 4 years...and even then it was just Hawaii. Sushi & JPop has been getting under my skin. I am road tripping from Dec 14-Dec 21 from Orlando to Miami. Anyone who is from that area...and wants to get a session on or knows a good party place...please hit me up! I'll have a car, and I'm ready to get my dancin' shoes on!!! After that, it's off to North Carolina, Pittsburgh...back to Osaka...then to the Philippines a couple days later.

Peace!

FroBot

Thursday, November 25, 2010

Ten Music Technologies to Be Thankful For Right Now

(Original Link - http://createdigitalmusic.com/2010/11/ten-music-technologies-to-be-thankful-for-right-now/)


Happy Thanksgiving to our American readers. I was thinking about technologies for which I’m particularly thankful, some non-obvious, some perhaps so obvious they might easily be taken for granted. Each, I hope, represents opportunities for others. At the risk of starting a Thanksgiving roast, in no particular order, here are the ones foremost in my mind in the waning days of 2010.



1. MIDI: MIDI gets kicked around a bit – it’s not a perfect protocol, commonly-used messages are low resolution, and the parts most people use really haven’t changed since the mid-80s. But don’t discount why we use it so much: it’s ubiquitous, cheap, and lightweight. Want something simple that works over WiFi and Bluetooth? Want to connect something from 1986 you found on eBay to your iPad and then use on a DIY synth with a $3 microcontroller? Want to connect an Xbox keytar without any hacking? MIDI may not be the right tool for every job, but as a lingua franca, it sure is darned useful. midi.org

2. Linux: Linux can still sometimes exhibit a punishing learning curve, and proprietary drivers for devices like video cards can cause issues. But in a world of wildly diverse hardware and painfully-quick obsolescence, Linux is a lifesaver. It can resurrect old machines, make netbooks usable, and the Linux kernel is fast becoming the solution for embedded gear from Android-powered devices to DIY projects. For music, that means an OS that can run on anything, and quickly wind up making noise with tools from Pd and Csound to Renoise and DJ app Mixxx. Suddenly, anything that runs on electricity and has a processor looks like fair game. linuxaudio.org

3. Music notation: Fun toys aside, what’s the real killer app in 2010? It might be the score. It’s still the fastest way to communicate a musical idea to someone else, or quickly play the Billy Joel tune your cousin wanted to sing along with. (Best karaoke machine in the world: your brain.) And this year, we saw improved ways to enter those scores, from ever-more-mature commercial packages to free tools like LilyPond. An iPad can be a fake book full of lead sheets; a browser can turn some quickly-typed notes into notation. All this using something that wouldn’t look entirely unfamiliar to someone who stepped through a wormhole from a few centuries ago.

4. Reaper: We face a challenge in music technology: we’ve actually got too many great options. So it’s a good thing that there’s at least one DAW that’s easy to recommend that you know people can afford, with pricing ranging from $40-150. Reaper runs on Mac, Windows, and (with WINE) Linux. It’s not bloated with features, has no DRM, is heavily extensible (with both custom plug-ins and scriptable MIDI). And if you’re trying to get a friend to try a DAW without (cough) pirating it, you can point them to Reaper’s free trial version. Add to that the fact that you can author Rock Band songs for the game platform – including full keyboard and guitar transcriptions in the near future with Rock Band 3 – and Reaper is a DAW worth keeping around. reaper.fm

5. Four-lettered Synth Makers That Remember the Past: Not one but two famous names from synths of yesteryear, MOOG and KORG, have been on fire in 2010. Moog celebrated its Minimoog anniversary with an enormous XL edition. Practical? Not terribly. Something boys and girls could pin up to their walls? Yes. And Moog also had a bigger-than-ever Moogfest, proving its synths and effects weren’t just the domain of electronic music geeks, plus an affordable iPhone/iPod touch app that turns those handhelds into portable machines capable of recording anything and adding far-out effects. KORG, for their part, proves a big music tech name can remember their past, too, with the soul of their MS-20 appearing in iPad apps, wonderful, stocking stuffer-friendly hardware (Monotron), new bundles of software emulation (for those who prefer “real computers” to iPads), and, heck, even retro t-shirts. What these two companies have in common: understanding that their legacy matters to people, and finding ways to get that legacy in front of as large an audience as possible. Those are both ideas I hope catch on. korg.com, moogmusic.com

6. Portable Recorders: Then: Marantz, Nagra, Tascam Portastudio. Today: go-anywhere field recorders from Tascam, Zoom, Roland, Korg, and many others. The ability to go out and actually record stuff remains one of the most essential needs in music tech. Today’s devices add nifty extras like pitch-independent tempo adjustment and built-in metronomes, making them as much a friend to musicians as they are sound designers. Odds are, if you’re reading this, some portable audio recorder is one of your most valuable possessions. Tascam DR-03 @ CDM

7. Pd: Pure Data, the open-source offspring of Max/MSP creator Miller Puckette and contributors around the world, is a free graphical patching tool that runs everywhere. You can use it on ancient iPods, or – via libpd – on bleeding-edge Android and iOS handhelds, in addition to (of course) desktop computers. It’s been incorporated in free and open source projects, and commercial and proprietary projects alike. Thanks to terrific free documentation and sample patches, you can also use it as a window into learning, with the aid of being able to see signal flow visually. (Even Max gurus can pick up tips for that environment with some of the online help.) The beauty of Pd – as with a number of tools – is that sometimes just making what you need is easier than making something someone else made do what you need. puredata.info, pd-everywhere @ noisepages

8. Bandcamp: The Web is littered with services catering to artists – not least being the chaotic mess that is the remains of MySpace. Bandcamp, in contrast, is simple, efficient, and functional, and for many of us has been a place to acquire music direct from artists as well as to publish it – no complicated jukebox/storefront middlemen needed. Some of my favorite listening this year came from Bandcamp. bandcamp.com

9. Contact mics: A few dollars in parts and a soldering iron will make you a perfectly-functional device you can use to explore sound. Or, you can splurge on high-end devices. Either way, the surest antidote to endless choice in software synthesis or enormous sample banks is to go out and get a little closer to sonic vibrations. brokenpants DIY contact mic tutorial

10. The Internet: Distraction. Time suck. Scourge to privacy. A funny thing happened on the way to the Internet: you may have found a group of people who inspired you to make more, and share more, helped you solve problems and get back to music. On Twitter, on Facebook, on forums, on, yes, our fledgling Noisepages, everywhere I go, I find people who help me get tech working for me and remind me why I love music. So… thanks. Maybe there’s hope for us after all. (see… The Internet)
That’s my list. What are you thankful for? Let us know in comments.

Both musicians and non-musicians can perceive bitonality

(Original Link - http://scienceblogs.com/cognitivedaily/2010/01/bitonality.php?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+scienceblogs%2Fcognitivedaily+%28Cognitive+Daily%29&utm_content=Google+Reader)


Take a listen to this brief audio clip of "Unforgettable." (original link)

Aside from the fact that it's a computer-generated MIDI performance, do you hear anything unusual?
If you're a non-musician like me, you might not have noticed anything. It sounds basically like the familiar song, even though the synthesized sax isn't nearly as pleasing as the familiar Nat King Cole version of the song. But most trained musicians can't listen to a song like this without cringing. Why? Because the music has been made "bitonal" by moving the accompanying piano part up two semitones (a semitone is the difference between a "natural" note and a sharp or flat). Here's the original, unaltered piece:

Can you tell the difference? A 2000 study led by R.S. Wolpert found that non-musicians couldn't distinguish between monotonal and bitonal music played side-by-side. Meanwhile musicians found artificially-created bitonal music to be almost unlistenable. For most non-musicians, if they heard anything wrong with the clips, they typically said they were being played too fast, or mentioned some other unrelated concept.
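Mechanically, the manipulation is simple: in MIDI, each semitone is one step in the note number, so making a piece "bitonal" in this way just means adding 2 to every note number in the accompaniment part while leaving the melody alone. A minimal sketch, with the function name and note lists my own illustrations rather than anything from the study:

```python
# Recreate the study's manipulation in miniature: shift one part
# (the accompaniment) up two semitones while leaving the melody
# untouched. MIDI note numbers rise by 1 per semitone.

def transpose(notes, semitones):
    """Shift a list of MIDI note numbers by the given number of semitones."""
    return [n + semitones for n in notes]

melody = [60, 62, 64, 65]      # C4 D4 E4 F4 (illustrative)
accompaniment = [48, 52, 55]   # C3 E3 G3, a C major triad

# Two semitones up: D3 F#3 A3 -- the accompaniment is now in D
# while the melody stays in C, producing the bitonal "crunch."
bitonal_accompaniment = transpose(accompaniment, 2)
print(bitonal_accompaniment)  # [50, 54, 57]
```

In a real stimulus every note event in the accompaniment track would be shifted this way, so the part remains internally coherent; it is simply in the wrong key relative to the melody.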

But Mayumi Hamamoto, Mauro Botelho, and Margaret Munger (AKA Greta) wondered whether years of musical training were really necessary to hear bitonal music. Bitonality is actually a bit controversial in the world of music, and it can be a little hard to define. In principle, there's a difference between bitonality and just playing or singing off-key, but in practice, the difference may not even exist. Advocates of bitonality like to point to the works of composers like Milhaud, Bartók, Prokofiev, and Strauss. These composers deliberately wrote in two different musical keys. But how is that different from occasionally or regularly writing dissonant chords? After all, the same notes can be written in any musical key. To be truly bitonal, advocates say, the two separate parts must unfold independently in different keys. This results in a distinctive "crunch" when the music is played. A separate question is whether this is noticeable. Wolpert's work shows that it is, at least for trained musicians.
Hamamoto's team replicated Wolpert's study by playing altered and original clips of familiar songs like the above example to three groups of undergraduates: "Musicians" with more than 5 years of training, "Amateur Musicians" with 1 to 5 years of training, and "Non-Musicians" with less than a year of training. There were 14 students in each group. Musicians were significantly better at noticing that the modified clips were bitonal or "out of tune."

Next, everyone was given a brief training session, where instead of modifying monotonal music to be bitonal, some of Milhaud's music originally intended to be bitonal was modified to be monotonal. Here's an example bitonal piece (Milhaud's "Botafogo"):

After hearing the clip and seeing it identified as bitonal, the students were told

Notice sometimes there is a "crunch" in the sound. This should sound somewhat unpleasant and feel like it shouldn't be that way.
Then they listened to a manipulated version of the same clip:

Again, they were told this clip was monotonal and directed to notice how the sound seems smoother and more pleasant (to my mind, it's not nearly as interesting as the original -- but that wasn't part of the study). Next they were trained with feedback, listening and identifying clips until they could accurately label four in a row. This took just a few minutes.

Finally, the respondents were tested on four new clips, all songs by Milhaud. This graph shows the results:
As you can see, for all the songs except "Ipanema," the students were quite accurate at identifying both bitonal and monotonal songs (error bars are 95 percent confidence intervals). More important, however, was that there was no significant difference in the results for Musicians, Amateur Musicians, and Non-Musicians. All three groups fared equally well.

The authors conclude that identifying bitonal music isn't a matter of years of musical instruction; it can be achieved with just a brief training session. In fact, the Non-Musicians took no longer than Musicians to complete the training session, so years of experience don't even help with learning about bitonality.

It also suggests that the controversy over whether bitonality actually exists may not be warranted. If nearly everyone can hear the difference, then it's probably a genuine musical phenomenon.

Monday, November 22, 2010

Junee train becomes a sound lab


MUSIC: Rolling Stock. Various sound artists and composer-performers. Wired Lab. In and around Junee, NSW, November 19. 
 
IT was noon on Saturday. Just over 200 people, a motley crew of local families and sound art aficionados from the city, were gathered at the Junee railway station. This was the third event that the irrepressible Sarah Last and her Wired Lab team have organised with the people of Junee: the one-day public art event featured 15 artists on a train, the culmination of a series of creative residencies in regional NSW.

Trains and everything associated with them are a religion at Junee, a wheatbelt town of about 4000 people, 444km southwest of Sydney. Its temple is the Junee Roundhouse, a transport museum with 42 tracks and dozens of old trains and carriages.

At the epicentre of the museum, a 33m turntable cranked into life as Dave Noyze and Garry Bradbury captured its industrial clangour with 15 microphones. Young men from the Australian Parkour Association leapt around the roofs of carriages. Outside, Joel Stern and Andrew McClennan created a gamelan sound tapestry from the rusting detritus of trains.

Then we boarded the train, its eleven carriages containing various sound events and theatre. Experimental films played in the sleeper compartments. One darkened carriage was festooned with LED glowsticks, creating a flicker-homage to Brion Gysin.

It was a happy mix of sound artist chic and local jollity that carried the train to its destination in Cootamundra, three hours away. There was a 40-minute pit stop at Cootamundra Station, where a Kenny Rogers impersonator conducted a cheesy quiz on the platform. Then it was back to the train for the return journey, and the most accomplished sounds of the day. British sound-gatherer Chris Watson, noted as the sound recordist for David Attenborough's television documentaries, recreated a train journey through northern Mexico, its running commentary and sound tapestry blending perfectly with the clatter of our own train.

A bus trip (sacrilege!) took us to the celebrated Junee Licorice and Chocolate Factory, where a rockabilly band, the Pat Capocci Combo, played into the night. "Much more fun than the RSL," a local woman said. "I'll be back for more of this, anytime."

Start-Up Company Music Mastermind Introduces Unique Music Creation Technology 'SoundBetter'


Calabasas, CA – November 18, 2010 – Music Mastermind, an independent music entertainment and technology company, revealed today details of SoundBetter, a cloud-based technology that lets anyone create studio-quality music. SoundBetter joins a robust, growing patent and trademark portfolio held by Music Mastermind (MMM).

The company's SoundBetter solution simplifies music making by automating typically complex digital audio workstation processes, thereby allowing anyone to instantly become a recording artist. SoundBetter provides a complete creative solution that lets users enhance their voices, transform their voices into instruments, create pro-sounding beats, add studio-quality backing tracks, and even generate and add adaptable licks to collaborate with friends and famous artists. The technology's entertaining, game-like elements utilize simple visual cues to make the creative process fun and accessible to all. SoundBetter produces truly individualized music that can be shared and discovered across a broad array of social networks.

"We're at the forefront of the next evolution of music entertainment, and it's time to break down the barriers that prevent people from expressing themselves musically," said Matt Serletic, CEO of Music Mastermind. "This company is all about fun and easy music creation for everyone. All people love music, and now absolutely anyone can produce great sounding songs to enjoy and share with the world."

Serletic, a multi-Grammy Award-winning producer and former Virgin Records Chairman and CEO, founded Music Mastermind in 2007 with his partner, Bo Bazylevsky, a veteran Wall Street bond trader, senior hedge fund portfolio manager, and former Global Head of Emerging Markets Corporate Trading at J.P. Morgan. Together, they compiled a world-class development team with a full-time staff of more than 30 professionals from multiple disciplines, including entertainment, sound engineering, music theory, technology, finance and gaming. Led by Chief Technology Officer Reza Rassool, whose work has garnered a Technical Oscar and Emmy, MMM's engineering and design teams have over 190 years of cumulative experience. Members of the team have advanced degrees in computer science and music, as well as 33 console game credits, including multiple Guitar Hero and Tony Hawk titles.

The company successfully raised its first round of funding in February 2010 with nearly $5 million from angel investors, and is currently in the process of closing its second investment round.

"Media creation and consumption are at an all-time high, and our technology will do for music what YouTube did for video," said Bo Bazylevsky, President and COO of Music Mastermind. "We want to put the power of real, true creation into everyone's hands, and we're confident that our products will do just that. The tech is wrapped in such a fun interface that you don't even realize that you're working to produce music!"

The company plans to implement this unique music creation technology across numerous mediums; details of MMM's new creation platform will be revealed in the coming weeks.

Music Mastermind simplifies the traditionally complex world of professional-quality music creation, allowing anybody with a creative idea to be heard. For more information about the company and its patented music creation technologies, please visit www.musicmastermind.com, or follow us on Facebook and Twitter.

About Music Mastermind, Inc.
Based in Calabasas, CA, Music Mastermind was founded by Grammy Award-winning producer/songwriter Matt Serletic and top Wall Street bond trader Bo Bazylevsky. Formed in 2007, the venture-backed start-up is dedicated to developing technologies that break down the barriers to music creation. For more information please visit www.musicmastermind.com.

Music turned into light, and fired at you


(Original Link - http://www.thestar.com/entertainment/music/article/890275--music-turned-into-light-and-fired-at-you)


When Richie Hawtin wanted to create synesthetic visuals triggered by the music he plays live as Plastikman, he turned to his old pals at Toronto software house Derivative.

Derivative’s TouchDesigner helped propel this year’s Plastikman tour to that mythical “next level” by providing an interface with the performance-friendly electronic-music software Ableton Live that allowed the component parts of Hawtin’s skeletal techno tracks to produce images that moved and changed shape in direct response to the sounds he was generating onstage. In 3-D, no less.
Heaven only knows how one actually brings something like that to fruition, but TouchDesigner, which will respond to pretty much any input you desire, from sound to light to touch and beyond, is the relatively young outgrowth of designer Greg Hermanovic’s longtime desire to use computers to produce “interactive, real-time art.” He’d been dreaming of it since he put his first pixel up on a computer screen while working on a U.N. research ship in Africa during the ’70s. He came a little closer to realizing that dream writing special-effects software with successful local CGI-enabler Side Effects, whose Houdini product has since been used in more than 400 feature films. And since launching TouchDesigner eight years ago, he has come as close as he’s yet been to his perfect vision: a worldwide, collaborative art-sharing platform, “self-perpetuating and a bit out of control in its own way,” that could also serve as a universal education and research tool.

TouchDesigner has made stunning “live” artworks possible everywhere from M.I.T. to the world’s largest yacht, but Hermanovic — whose software’s patch-and-collage aesthetic is inspired in part by his love of old modular synthesizers and their many dangling cables — has also become something of a go-to guy for electronic musicians looking for a visual component to their shows. Swayzak enlisted Derivative, for instance, to jazz up its recent DJ gig at 99 Sudbury, and when the Star spoke to Hermanovic this past Friday, he had just returned from a little last-minute tweaking with DJ Shadow’s crew at the Phoenix.

Q: So was Hawtin running some custom stuff for those Plastikman live gigs?

A: It was, but everything that Rich is doing you can do with the free version that’s on our Web site. It’s custom because we added more stuff to it, but anybody could have done it. That’s the nice thing about TouchDesigner: anybody can use it to make anything they see other people making.

Q: In this case, Ableton Live was being used to generate the visuals, right?

A: He’s sending this stream of data into TouchDesigner, which is running live on another computer. So we’re just taking all this looping data and this controller data, and every song we have mapped differently to a visual. So TouchDesigner takes his inputs and for every song we know what the visual is going to be so we display it out on the LED screens. He’s kind of building music tracks as he goes and we’re working with him and a visual designer going ‘Okay, part of that sound goes with this visual element and this knob goes with that thing, and then when the song progresses it will increase the brightness of this and the size of that.’ So it’s Rich and us working side-by-side so you end up with a look and a theme for a song.
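The per-song mapping Hermanovic describes, controller data in, visual parameters out, can be sketched in a few lines of Python. Every name here (the song, the channels, the parameters and their ranges) is a made-up illustration, not Derivative's or Hawtin's actual setup:

```python
# Hypothetical sketch of a per-song mapping from MIDI-style controller
# values (0-127) to visual parameters. All names/ranges are illustrative.

def scale(value, out_lo, out_hi, in_lo=0.0, in_hi=127.0):
    """Linearly rescale a controller value into a parameter range."""
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# One mapping per song: controller channel -> (visual parameter, range)
SONG_MAPPINGS = {
    "spastik": {
        1: ("brightness", (0.0, 1.0)),
        2: ("mesh_scale", (0.5, 3.0)),
    },
}

def apply_controls(song, controls):
    """Turn raw controller values into a dict of visual parameters."""
    params = {}
    for channel, value in controls.items():
        if channel in SONG_MAPPINGS[song]:
            name, (lo, hi) = SONG_MAPPINGS[song][channel]
            params[name] = scale(value, lo, hi)
    return params

print(apply_controls("spastik", {1: 127, 2: 0}))
# -> {'brightness': 1.0, 'mesh_scale': 0.5}
```

TouchDesigner itself does this graphically with node networks; the point is only that each song carries its own table of which knob drives which visual.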

Q: Why design a tool for making interactive art?

A: I’m a big fan of experimental films — I’m a huge fan of Norman Mclaren — and I wanted to reproduce some of these experimental-film effects using software, so that’s why I got into computer graphics: so I could do special effects. I wanted to do what a musician does — perform live, tweak things — and do that visually, but I couldn’t do that with special-effects software. In the ’90s, you couldn’t do real-time computer graphics. Well, you could, but it was on computers that cost $200,000.

Q: What’s your ultimate goal for TouchDesigner? Having it react directly to electronic signals from people’s brains?

A: When I see researchers who are doing high-end chemistry research or something using a component made by a 10-year-old in his basement, not knowing where it came from, that’s when I’ll be aware that we’ve kind of closed the loop: when kids are making parts of bigger systems for high-end researchers or professionals. It’s gonna happen.

Music student does gig hours after rescuing teen in Colchester


(Original Link - http://www.gazette-news.co.uk/news/8677399.Music_student_does_gig_hours_after_rescuing_teen/?ref=rss)


A MUSIC student who rescued a girl from a burning house returned to college hours later to play his first gig.

Jamie Cunliffe, 21, of Magnolia Drive, Colchester, came to the aid of Naomi Hare, who was trapped as fire ripped through her home in James Wick Court, Balkerne Hill.

Naomi, 17, a student at Colchester Institute, was asleep when the blaze started in her sister’s bedroom.
She was woken by a smoke alarm but was unable to get down the stairs due to the smoke.

Jamie, who is studying music at the same Sheepen Road college, said: “I was walking back to my mate’s house and saw a girl at the window screaming and screaming.

“I went to kick the door in, but it turned out it was open.

“I got to the top floor, where I could hear her screaming. It was really scary and the smoke was so black, and it was so hot.

“She was having a panic attack and wouldn’t move, so I picked her up and carried her out.
“We got outside, but then there were small explosions coming from the house, so I picked her up again and carried her around the corner.”

Neighbours came to the pair’s aid.

Jamie and Naomi were treated by paramedics for smoke inhalation, before being taken to Colchester General Hospital for further treatment.

Jamie added: “I was treated at hospital, but wanted to leave because I was desperate to get to a gig.
“I’m in a band called the Elements, and that night was our first gig.

“I'm the frontman in the band, and we have been practising really hard – I couldn’t not go to it.”

Naomi’s mum, Annette Kelly, phoned Jamie to thank him for saving her daughter’s life.

Fire crews, who are investigating the cause of the fire, which started on Tuesday at about 1.25pm, also praised Jamie for rescuing the girl.

He added: “People have been saying I’m a hero, but I’m not looking for praise. It was just a natural reaction. I couldn’t just stand there and watch.”

Fears for future of school music lessons


(Original Link - http://www.bbc.co.uk/news/education-11796636)

School music lessons could be hit as local councils make savings and school budgets are redrawn, it is feared.

One in five music services, which support schools, expect councils will completely axe their grants and half fear cuts of up to 50%, a survey suggests.
The Federation of Music Services warned that some services which help provide subsidised lessons could collapse.

The government said all pupils should be able to learn an instrument or sing.
It has commissioned a review of music provision in schools, being carried out by Classic FM head Darren Henley, but this is not due to report until January.

However, local authorities in England, which face cuts of about a third, get their funding allocations in early December.

It is clear from the federation's survey of 158 music services in England, Wales and Northern Ireland that many are already planning cuts, with some preparing to axe funding completely.
Local authorities provide just one strand of funding for school music services, with the rest coming from central government grants and parental contributions.

But the expected cuts come as schools face a huge shake-up of their budgets. A number of schemes dedicated to supporting school music face cuts or being channelled into a general schools budget for redistribution.

The Department for Education later said it had not yet taken a decision on the main £82.5m Music Standards Grant and would not do so until the Henley review had reported.
But it would not guarantee that the money would be ring-fenced within schools.

'Steep decline'
 
Federation of Music Services (FMS) chief executive Virginia Haworth-Galt said: "We recognise the pressure many local authorities are under, but would urge them to hold back their plans until we know the results of the Henley Review.

"Music and our children's education are too important to be jettisoned like this, particularly when we know that 91% of the public back music education in schools."

She added that the FMS would be very disappointed if the music grant went directly into schools' budgets without any ring-fencing for music education.

"This situation occurred in the early 1990s with disastrous results; music went into a steep decline as the monies were spent elsewhere in schools. This is a music lesson that should not be repeated," she added.

Conductor of the Bedfordshire Youth Orchestra Michael Rose says music services in his area, Central Bedfordshire, are set to have budgets and teaching staff cut to zero.

He said as music services were non-statutory they were particularly vulnerable in the present climate of cuts.

He said: "If funding is lost in this way music lessons will become the sole preserve of the middle classes."

He added: "Instrumental teaching in the county's schools is provided by a central staff of highly qualified instrumental teachers. It has resulted in literally many thousands of children having the experience of learning an instrument."

Schools minister Nick Gibb said too many children in state schools were denied the opportunity to learn to play a musical instrument.

This was why he had launched a major review of how music is taught and enjoyed in schools to help make sure all pupils get an opportunity to learn to play an instrument and to sing.

Its recommendations would determine how future funding could best be used, he added.
'Shocking'

"Evidence tells us that learning an instrument can improve young people's numeracy and literacy skills and their behaviour.

"It is also simply unfair that the pleasure of musical discovery should be the preserve of those whose parents can afford it."

"As part of that review recommendations will be made to determine how future funding can best be used," he said.

He added that decisions on central funding for music would not be made until after the review had reported.

General secretary of the National Union of Teachers Christine Blower said the cuts to music in schools were all the more shocking in light of Michael Gove's announcement that he would hold a review into music education in schools, in which he called it a "sad fact" that too few state school children learnt an instrument.

She added: "Music in schools makes a contribution way beyond the straightforward exercise of learning an instrument.

"Children and young people can experience coming together in a creative environment which benefits them in other aspects of their school life."

Recording Pioneers - Stories from History


Know your roots!? I came across a great website detailing some of the finest stories from the great pioneers of recorded sound!


A good history lesson is due!!!

Peace!

FroBot

Sunday, November 21, 2010

Ancient trumpets played eerie notes

(Original Link - http://www.sciencenews.org/view/generic/id/65784/title/Ancient_trumpets_played_eerie_notes#sounds)

Scientists analyze tunes from 3,000-year-old conch-shell instruments for insight into pre-Inca civilization.


Now you can hear a marine-inspired melody from before the time of the Little Mermaid’s hot crustacean band. Acoustic scientists put their lips to ancient conch shells to figure out how humans used these trumpets 3,000 years ago. The well-preserved, ornately decorated shells found at a pre-Inca religious site in Peru offered researchers a rare opportunity to jam on primeval instruments.

The music, powerfully haunting and droning, could have been used in religious ceremonies, the scientists say. The team reported their analysis November 17 at the Second Pan-American/Iberian Meeting on Acoustics in Cancun, Mexico.

“You can really feel it in your chest,” says Jonathan Abel, an acoustician at Stanford University’s Center for Computer Research in Music and Acoustics. “It has a rough texture like a tonal animal roar.”

Archaeologists had unearthed 20 complete Strombus galeatus marine shell trumpets in 2001 at Chavín de Huántar, an ancient ceremonial center in the Andes. Polished, painted and etched with symbols, the shells had well-formed mouthpieces and distinct V-shaped cuts. The cuts may have been used as a rest for the player’s thumb, says study coauthor Perry Cook, a computer scientist at Princeton University and avid shell musician, or to allow the player to see over the instrument while walking.
To record the tunes and understand the acoustic context in which the instruments, called pututus, were played, the researchers traveled to Chavín.

 As an expert shell musician blew into the horn, researchers recorded the sound’s path via four tiny microphones placed inside the player’s mouth, the shell’s mouthpiece, the shell’s main body and at the shell’s large opening, or bell. Similar to a bugle, the instruments only sound one or two tones, but like a French horn, the pitch changes when the player plunges his hand into the bell.

The team used signal-processing software to characterize the acoustic properties of each trumpet. Following the sound’s path made it possible to reconstruct the ancient shell’s interior, a feat that normally involves sawing the shell apart or zapping it with X-rays.
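That kind of characterization can be sketched in miniature. The snippet below is an illustration, not the team's actual pipeline: it estimates the strongest frequency in a recording with an FFT, using a synthesized tone near 300 Hz in place of real pututu microphone data.

```python
# Minimal sketch: estimate an instrument's fundamental frequency by
# finding the strongest peak in its magnitude spectrum. A pututu-like
# tone is synthesized here in place of real recordings (assumption).
import numpy as np

def fundamental_hz(signal, sample_rate):
    """Return the frequency of the strongest spectral peak."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

rate = 44100
t = np.arange(rate) / rate          # one second of audio
# fundamental at 300 Hz plus a weaker first overtone at 600 Hz
tone = np.sin(2 * np.pi * 300 * t) + 0.4 * np.sin(2 * np.pi * 600 * t)
print(round(fundamental_hz(tone, rate)))   # -> 300
```

With one second of audio the FFT bins land 1 Hz apart, so the peak bin reads off the pitch directly.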

The researchers also wanted to know how the site’s ceremonial chamber, a stone labyrinth with sharply twisting corridors and ventilation shafts, changed the trumpet’s sound. To find out, the team arranged six microphones around the musician and reconstructed the sound patterns on a computer.
If the trumpets were played inside the stone chamber in which they were found, the drone would have sounded like it was coming from several different directions at once. In the dimly lit religious center, that could have created a sense of confusion, Abel says.

“Were they used to scare people while they were there?” asks Abel. “There are still a lot of things left open.”

Turns out, such questions about how sounds affect people and their behavior, an area called psychoacoustics, can be tested. It's a field of active research, and not just for ancient civilizations: Another group at Stanford is now studying how a room’s acoustics affects human behavior. In one recent experiment, researchers separated test subjects into different acoustic environments to do a simple task — ladling water from one bucket to another in a dimly lit room.
“What your ear can actually hear plays into how you would behave, or the psychological experience in the situation,” says Abel.



SHELL CACOPHONY

A group of conch-shell instruments made by a pre-Inca civilization sounds similar to a kid learning to play the trumpet. (Audio in the original article.)

ANCIENT TONE

A musician plays the fundamental frequency and the first overtone of a 3,000-year-old shell trumpet unearthed in Peru. (Audio in the original article.)


What is Talent?

(Original Link - http://www.theatlantic.com/national/archive/2010/11/what-is-talent/66684/)

Thanks to Edward Tenner for alerting us to a new WSJ piece by Terry Teachout that attacks Anders Ericsson's so-called "10,000-hour rule." Teachout summarizes the Ericsson rule in the following way:

"To become successful at anything, you must spend 10 years working at it for 20 hours each week. Do so, however, and success is all but inevitable."

A superb straw man. So simple to understand, so easy to knock down. But think about it for a moment: Would anyone with half a brain actually argue that a simple *amount* of practice time could *guarantee* success? Of course not, and that's not even remotely what Anders Ericsson does. 

The real Anders Ericsson is one of the leaders of a fascinating new academic field called "expertise studies" which carefully deconstructs the longstanding notion of innate talent by looking for hidden components that might actually help to explain success.

This is what science does. It seeks to understand how things actually work rather than settle for mysterious formulations like "gifted," "natural-born," and "genius."  

Teachout also writes that "The problem with the 10,000-hour rule is that many of its most ardent proponents are political ideologues who see the existence of genius as an affront to their vision of human equality, and will do anything to explain it away."

I honestly do not know which proponents Teachout is referring to. The writers that I'm most familiar with on the subject of understanding talent and success -- Malcolm Gladwell, Daniel Coyle, Mihaly Csikszentmihalyi, Geoff Colvin, Carol Dweck -- are all actually trying to understand what goes into talent and success. 

He might be referring to the title of my book, The Genius in All of Us, which some non-readers have misinterpreted as a blank-slate argument of pure egalitarianism. But, again, that's a straw man. No one here is arguing that we're all equal or equally capable of the exact same achievements. We all have differences, and are therefore assured of becoming different people.

When it comes to the question of individual potential, though, it's important to avoid what neuroscientist and musicologist Daniel J. Levitin calls "the circular logic of talent." "When we say that someone is talented," he says, "we think we mean that they have some innate predisposition to excel, but in the end, we only apply the term retrospectively, after they have made significant achievements."

So what is "talent"? Is it some magic or genetic stuff that gives some of us a springboard to success? The closer we look at the building blocks of success, the more we understand that talent is not a thing; rather, it is the process itself. 

Part of this new understanding requires a new insight into genetics that helps us get past the myth of genetic-giftedness. Genes influence our traits, but in a dynamic way. They do not directly determine our traits. In fact, it turns out that while it is correct to say that "genes influence us," it's just as correct to say that "we influence our genes."

Everything about our lives is a process, and we are indebted to Anders Ericsson and others for helping us to obtain a richer understanding of that process. 

It's interesting that Teachout pounds so hard on (nameless) obstinate ideologues who refuse to open their minds to evidence. Blind ideology is exactly what I'm seeing in his confident (and factless) assertion that Wolfgang Mozart's success as a composer (as opposed to his sister Nannerl's lack-of-success) is simply due to this: "He had something to say and she didn't. Or, to put it even more bluntly, he was a genius and she wasn't." Twenty minutes of reading about their early lives and the cultural context provides a much richer understanding than that. Why rush to enshrine a myth when we have so many rich facts and observations to help us come closer to a true understanding?

Teachout also writes that any suggestion of genius as a process "fails to account for the impenetrable mystery that enshrouds such birds of paradise as Bobby Fischer, who started playing chess at the age of 6. Nine years later, he became the U.S. chess champion." Again, why leap to "impenetrable mystery" when we can actually understand these things better? There are some terrific books out there now that help us closely examine talent and success. Why is Teachout trying to convince us not to examine the evidence and not to think about these things more deeply?

In his 1878 book Menschliches, Allzumenschliches (Human, All-Too-Human), Friedrich Nietzsche described greatness as steeped in a process, and great artists as tireless participants in that process:
 
"Artists have a vested interest in our believing in the flash of revelation, the so-called inspiration . . . [shining] down from heavens as a ray of grace. In reality, the imagination of the good artist or thinker produces continuously good, mediocre, and bad things, but his judgment, trained and sharpened to a fine point, rejects, selects, connects . . . All great artists and thinkers [are] great workers, indefatigable not only in inventing, but also in rejecting, sifting, transforming, ordering."
As a vivid illustration, Nietzsche cited Beethoven's sketchbooks, which reveal the composer's slow, painstaking process of testing and tinkering with melody fragments like a chemist constantly pouring different concoctions into an assortment of beakers. Beethoven would sometimes run through as many as sixty or seventy different drafts of a phrase before settling on the final one. "I make many changes, and reject and try again, until I am satisfied," the composer once remarked to a friend. "Only then do I begin the working-out in breadth, length, height and depth in my head."

Alas, neither Nietzsche's nuanced articulation nor Beethoven's candid admission caught on with the general public. Instead, the simpler and more alluring idea of "giftedness" and "genius" prevailed and has since been carelessly and breathlessly reinforced by ideologues. But we can do better. We have the tools and the evidence now to go beyond "genius," beyond "gifted," beyond "innate," and beyond "impenetrable." 

Who knows, maybe someday we can even catch up to Nietzsche.


Thursday, November 18, 2010

My 2010 Lessons for DJing - Less is More - by FroBot


Well...2010 will be over in about a month or so...and I have stopped DJing for the rest of the year. I have a lot going on in my life, like a video/audio company I am starting in Hawaii and a vacation back to America. So I wanted to write a small piece about my biggest lesson of 2010 when it comes to DJing.

Now, before I go too far, I know tons of you avid Ableton DJs are gonna rip me apart for this blog. Remember, this is an opinion, and it isn't necessarily the best opinion...but it's MY opinion.

OK...so my biggest lesson of 2010 is "LESS IS MORE". This applies in many ways...let's start with the simplest one...and I will build up from least important to most important.

5. Less is more when it comes to your track's content. I have noticed this year that the tracks (in house and tech house) with nice simple bass lines, steady swinging beats, and profound but simple lyrics are ALWAYS getting the BEST response on the dance floor. Maybe it's because the average dancer is not as musically inclined as the artist performing it; maybe it's because, from a technical standpoint, there is more room for certain frequencies to stand out and punch. But what I THINK it is, is that producers, now in the days of digital releases, are chasing impact and trying to make their track the LOUDEST they possibly can. Simpler grooves create more room for compression, and ultimately more headroom at the final output stage...letting you make a LOUDER track. I have noticed that most people don't notice all the great effects and envelope automation that I do...but rather HOW LOUD the track is. The producers who can make these SUPER LOUD tracks always seem to sound better than the LESS LOUD track that was on right before. When the track is simpler, you can push the overall level pretty high (above industry standards, even into distortion)...and that in turn makes the BASS sound more bassy and the kicks thump your chest more. This ultimately has a greater effect on the crowd than complex rhythms and melodies, which make controlling distortion above 0dB rather difficult. So when I get a new track and toss it into Ableton, I always check how loud it is compared to my other tracks. I could raise the track's individual gain myself...but that is not CONTROLLED distortion the way producers do it. They have spent countless hours raising the gain and using mastering techniques to clean up the distortion...using their ears to decide HOW distorted it is. When you can keep a track at its normally produced level...yet it's LOUDER...that is when you get a thumping track in the club.
Even when I listen to a track at home on professional studio monitors and, as a producer, can hear the compression and distortion (especially in the crash cymbals), it never seems to be noticeable in the club...and definitely not noticeable to the dancers. So...back to simple, thumping, loud tracks! I know from a purist standpoint it's WRONG...but in the club...all that matters is making people's feet and bodies groove harder.
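If you want to see what I mean by checking loudness, here is a tiny Python sketch. It's a toy illustration, not a real meter (proper meters follow standards like LUFS); it just compares the RMS level of two signals in dBFS:

```python
# Toy loudness check (illustration only): RMS level in dBFS of two
# sine "tracks", one mastered quiet and one mastered hot.
import numpy as np

def rms_dbfs(samples):
    """RMS level of float samples (full scale = 1.0), in dBFS."""
    rms = np.sqrt(np.mean(np.square(samples)))
    return 20 * np.log10(rms)

t = np.linspace(0, 1, 44100, endpoint=False)
quiet = 0.1 * np.sin(2 * np.pi * 440 * t)   # low peak level
loud  = 0.8 * np.sin(2 * np.pi * 440 * t)   # pushed much hotter

print(round(rms_dbfs(quiet), 1))   # -> -23.0
print(round(rms_dbfs(loud), 1))    # -> -4.9
```

A track mastered hot sits many dB above a quiet one at the same fader position, which is exactly the difference the crowd hears.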

4. Less is more in terms of the amount of remixing you do. Again, I know tons of you DJs will disagree...but this is what I think. There is a BIG DIFFERENCE between scratch DJs and house DJs. In house...people want to hear tracks for a little longer so they can really groove on them. This is not hip-hop, where you are playing vocally charged tracks that people know because they are remixes of Top 40 tracks. These are groovy, rhythmic tracks that people move to because of their dynamics and flow. Funky to the core. Remixing is a nice technique for a DJ, but use it sparingly. One reason is that, most of the time, if you are a thoughtful, "searching for tracks" kind of DJ...chances are 99% of the people at the club have never HEARD the song you are playing in the first place. Remixing it doesn't do much good when they don't know what it sounded like originally; it defeats the point of live remixing unless they can tell you are remixing it. And another thing...what makes you think you can remix it BETTER than the original artist anyway? That artist spent COUNTLESS hours making their track, thinking about every little detail of how they wanted it to sound. If you are remixing it live, you are changing it to sound the way YOU want it to sound...not how the artist originally intended. By throwing an a cappella or something over it...you are making the track into what YOU think is good, not what THEY thought was good...and to be quite frank...99% of the time the artist had it RIGHT in the first place...the DJ only ruined it. From a production standpoint (which I will get into in my most important lesson)...and from a TASTE standpoint. Usually, the reason we DJs remix a track is that we have heard the original so many times that remixing makes it sound FRESH to us. But FRESH does not mean BETTER. As DJs, we listen to TONS of songs. This creates a vicious "ADD-style" chain reaction.
The more tracks we listen to, the more we want to hear fresh new ones...and the more quickly we get sick of the tracks we have. So what do we do when we really like a track but have heard it too many times? Remix it. But again, that doesn't make it BETTER...it only makes it DIFFERENT. In the future, I will only use remixing techniques if I absolutely feel it enhances the track, really connects with the audience, and is worth the effort. I won't do it for the mere sake of remixing.

3. Less is more in terms of the amount of effects I use. To be quite frank...it's OVERDONE. All these filters, delays, flangers...blah blah blah...it's old! Some DJs do it ALL THE TIME. It sounds fucking horrible. First off, you're ruining the dynamics of the track by doing it. Since musical notes have fundamental frequencies and harmonics...removing some of them with bandpass filters ruins other parts of the spectrum that you aren't even filtering. Low-pass and high-pass are SIMPLY overdone. They can be used NICELY...HERE AND THERE...on build-ups...or artistically where beats are becoming stagnant. But in the future I will do less. If you are an Ableton DJ...you have an INFINITE number of OTHER ways to make an interesting mix using clip envelopes and more thought-out transitions...rather than just turning some bullshit knob for the mere fact that you are BORED. As a house DJ...it's OK to rock out a tune...and ENJOY IT...let it play the way it was intended.
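To show what a low-pass actually does to a track, here is a toy one-pole filter in plain numpy. This is an illustration, and much gentler (6 dB per octave) than the filters on a DJ mixer, but the effect is the same idea: with the cutoff at 500 Hz, a 5 kHz hi-hat-ish partial drops roughly tenfold while a 100 Hz bass partial passes nearly untouched.

```python
# Toy one-pole (RC-style) low-pass filter, illustration only.
import numpy as np

def one_pole_lowpass(signal, cutoff_hz, sample_rate):
    """y[n] = y[n-1] + alpha * (x[n] - y[n-1]), a 6 dB/octave low-pass."""
    alpha = 1.0 - np.exp(-2 * np.pi * cutoff_hz / sample_rate)
    out = np.empty_like(signal)
    acc = 0.0
    for i, x in enumerate(signal):
        acc += alpha * (x - acc)
        out[i] = acc
    return out

rate = 44100
t = np.arange(rate) / rate
# "bass" at 100 Hz plus "hi-hat-ish" energy at 5 kHz, equal amplitude
mix = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 5000 * t)
filtered = one_pole_lowpass(mix, cutoff_hz=500, sample_rate=rate)

# After filtering, the 5 kHz bin is far weaker than the 100 Hz bin.
spectrum = np.abs(np.fft.rfft(filtered))
print(spectrum[5000] / spectrum[100] < 0.2)   # -> True
```

Note the attenuated partial doesn't vanish, it just gets quieter, which is why heavy filtering thins out the whole groove rather than cleanly removing one element.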

2. Less is more! I'm going back to the basics. To me, the art of mixing is just that: MIXING! Focusing on the seams between tracks and making them SEAMLESS! That is what I have been focusing on, and what I want to do next year. So many DJs are up there turning tons of knobs, doing all these crazy DJ effects...but when it comes time to switch from one track to the next...it's a horrible...noticeable change! Instead of going nuts on your gear...how about using your time up there to think more carefully about your next track...or better yet...when you practice at home...remember what works and what doesn't. It's OK to pre-plan a little, as long as you can change with the crowd. But DJing is about making nice seamless transitions between ONE track and the next! With Ableton Live, you have no EXCUSE for bad mix points besides your own failure to prepare, or to hear what goes with what. With tools like Mixed In Key (for only 40 dollars) and Ableton's ability to warp and match the timing of tracks...there is really NO EXCUSE. Sometimes DJs seem to feel that if they are standing up there not doing anything...they are doing something wrong. But your HANDS don't have to be doing anything...how about your BRAIN instead! Ultimately, what matters is making people enjoy the music and dance. For the most part...they don't know what you're doing anyway...but they WILL notice when you change between two tracks drastically. So make those mix points seamless, and spend more time thinking about HOW to make them seamless using envelope automation or skills. And don't worry about the people looking at your screen...or the club owner who knows a thing or two from past DJs and judges whether you are doing A LOT or too little. THEY DON'T MATTER. What matters is good, thumping beats coming out of the speakers...not what the 1% of other DJs who happen to be in the club THINK about the complexity of your mixing.
They are most likely just jealous anyway, thinking "why is this DJ doing so much less than I do...so much less talented than I am...but the people are grooving like there's no tomorrow?" Fuck 'em, because ultimately...you are the smarter DJ...not the one just showing off the capabilities of your computer and software.
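For what it's worth, the most basic seamless seam can even be written down: an equal-power crossfade, where the outgoing track fades on a cosine curve and the incoming one rises on a sine curve, so the combined power never dips in the middle the way a straight linear fade does. A minimal sketch:

```python
# Equal-power crossfade sketch: cos/sin gain curves keep combined
# power constant through the seam (illustration, not Ableton's fade).
import numpy as np

def equal_power_crossfade(tail, head):
    """Blend two equal-length sample arrays over their full length."""
    t = np.linspace(0.0, np.pi / 2, len(tail))
    return tail * np.cos(t) + head * np.sin(t)

tail = 0.5 * np.ones(1000)   # stand-in for the outgoing track's end
head = 0.5 * np.ones(1000)   # stand-in for the incoming track's start
mix = equal_power_crossfade(tail, head)

# The equal-power property: the squared gains always sum to one.
fade = np.linspace(0.0, np.pi / 2, 1000)
gain_out, gain_in = np.cos(fade), np.sin(fade)
print(np.allclose(gain_out**2 + gain_in**2, 1.0))   # -> True
```

The cos²+sin²=1 identity is the whole trick: two equally loud tracks hand off without the overall level sagging mid-transition.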

1. OK...now the most important reason why LESS is MORE! OK...start the hate speech..."FroBot...you're an idiot...you are wrong...etc etc." Alright...here it is. Since the release of the APC40 and Novation Launchpad...I have seen a drastic excess of Ableton DJs. I, just like them, when I got my Launchpad and VCM-600, loved the fact that I could download shitloads of loop samples...play them all together...improv a set...and make a NEW track that no one has ever heard, on the fly. It is cool, and it really feeds the need to HEAR new sounds constantly. It's almost like a disease we computer DJs have. After realizing the capabilities of your software...you want to exploit them by running 12 tracks at the same time...individual hi-hat and kick samples...etc etc. It is really cool...and for a LIVE-style performance...where you are playing with BANDS and improv musicians...it is truly great. I definitely enjoy it more than DJing, because I have more control over unique sounds, and it really fuels my creativity. But...DJing is not about this. It's about providing a thumping beat that is rhythmically stable and full of nice changes, build-ups, and thought-out construction. The key here is - IMPROV ABLETON AUDIO IS NOT MASTERED AUDIO!!!!!!
This is key to remember. There is a HUGE DIFFERENCE between a song made by a producer that has been compressed, balanced, and made to perfection - and running multiple audio tracks together, improv style. Mastering is a KEY element of making a dance track...and real producers know this. There are so many important elements that go into getting the THUMP out of your kick, the WARMTH out of your bass, and the frequency separation of all your elements. Steps involving compression, harmonic balancing, EQing, overall reverb, harmonic exciters...all very precise configurations depending on the frequencies being used. When you are playing with multiple samples, especially improv style...you are taking this element out of the track-making process...leaving...what producers consider...a track before the mastering stage...or even worse...the mixing stage. Each sample that you play uses certain frequencies in the spectrum. For things to sound right and powerful, it is important to make way for each sound to stand out clearly...which means TIGHT EQing. Using notch filters to remove certain elements is CRUCIAL in making a thumping dance track. Especially in your kick and bass...most producers use sidechaining when producing to make sure that the kick and bass each have room to stand out and shine...and that their hi-hats have nice placement, ALONE...to stand out. And an even more juvenile mistake...how about the KEY of the goddamn sample? Tons of people aren't even checking the keys of their samples...running a kick at, say, D, and a bass at C. It just sounds terrible!!!! Without an understanding of your parametric EQ, spectrum analyzer, and the concept of detuning your samples...you can't even start to improv using your Launchpad or other MIDI controller. The overall result is a LESS powerful-sounding set...and it obviously sounds different from the DJs before and after you who are using vinyl, CDJs, or even a computer doing less complex mixing.
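The detuning point is simple arithmetic, if you've never done it: in equal temperament each semitone is a frequency ratio of 2^(1/12), and a cent is 1/100 of a semitone. A quick sketch (the Hz values are standard approximate fundamentals, used here only for illustration) shows how far apart that D kick and C bass really are:

```python
import math

# How out of tune are two samples? Interval in cents between their
# fundamentals (1200 cents = one octave, 100 cents = one semitone).

def cents_between(f1: float, f2: float) -> float:
    """Interval from f1 up to f2, in cents."""
    return 1200.0 * math.log2(f2 / f1)

c3 = 130.81   # approximate C3 fundamental, Hz
d3 = 146.83   # approximate D3 fundamental, Hz

offset = cents_between(c3, d3)
print(round(offset))  # ~200 cents: the bass must come up two semitones
```

So to make that pair sit together, one sample has to be pitch-shifted by two full semitones - exactly the kind of thing a spectrum analyzer and your sampler's detune control are for.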
I didn't even get into how HARD it is to do all this mixing correctly in the first place...and many of the people I see doing this style of DJing are NEW to DJing...and they can run 6-12 tracks at the same time without fucking up? That is a whole other point in itself. NOW I understand why many producers still DJ on CDJs...even though they really understand Ableton. Because in the end...all that shit doesn't matter...it's about a good, rocking beat. Even if you are just adding a few samples on top of an already mastered track...by doing that...you are RUINING the final mastered sound of the track. When you add frequencies to a mix...it not only puts new frequencies in...but can change the correlating harmonics of other sounds. Just look at your spectrum analyzer...sit down with a professional...and prepare to cover yourself from the vomit that is certain to end up in your lap.

Now, some of you proficient with Ableton Live know some workarounds for this. Some I have heard of are using Ozone, analog warmers...etc etc. Yes, these are all good ideas, and they can help to clean up your sound...but none of them can even compare to the results of a nicely mastered track. There is a video where deadmau5 showed how he gets the final sound from his improv-style sets. Here - http://www.youtube.com/watch?v=GTCqeWu094I It takes TONS of processing, and extreme knowledge of digital music, before you can actually produce a live set in improv form that stands up to real mastered tracks. The fact that he already has a good background in music and digital production also helps with this. He really isn't improvising...he knows what his sounds are doing, and mixes them QUICKLY...but he is still thinking about the placement of certain frequencies. This is not the same as loading up a bunch of samples and playing them all at the same time. Save that for JAM SESSIONS, or LIVE EVENT gigs, where the other artists at the event are ALSO using un-mastered audio samples...or live instruments. When playing these kinds of events, your final output sound is not intended to be CLUB THUMPING, but rather an artistic musical creation where value is placed on the art and not the power of the sound. I have YET to see a local Ableton DJ who plays in this improv style in the clubs...even come close to getting the sound of a mastered track. In the end, you are left with weaker dynamics, lower volume, and ultimately you have to push the gain on the club's mixer. That is the only option you are left with...but it still will not compete.
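The "weaker dynamics, lower volume" point can be shown with a toy example. At the same peak level, a heavily limited (mastered) signal carries far more average (RMS) energy than a dynamic one - which is exactly why un-mastered improv audio sounds quiet next to it. The numbers below are invented for illustration; a real mastering chain is much more involved than one brickwall limiter.

```python
import math

# Toy demonstration: same peak, very different perceived loudness.

def rms(samples):
    """Root-mean-square level, a rough proxy for loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def hard_limit(samples, ceiling=0.5, makeup=2.0):
    """Crude brickwall limiter: apply makeup gain, clamp peaks at ceiling."""
    return [max(-ceiling, min(ceiling, s * makeup)) for s in samples]

# A dynamic "un-mastered" signal: quiet body with one loud transient.
dynamic = [0.1 * math.sin(i / 5.0) for i in range(1000)]
dynamic[500] = 0.5  # a single peak already at the ceiling

limited = hard_limit(dynamic)

print(max(abs(s) for s in dynamic))  # both peak at 0.5...
print(max(abs(s) for s in limited))
print(rms(limited) > rms(dynamic))   # ...but the limited one is louder
```

Pushing the gain on the club's mixer only raises the peak along with the body, so the headroom runs out long before the un-mastered set catches up.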

1.2 - Oh...and less is more for one other very important reason! You can get DRUNK! When you are at the club...you wanna have a good time too! Don't forget to enjoy yourself...and sometimes, you don't have much of a choice. By keeping your setup simple...all the tequila shots and free beer won't affect your performance much, even if you have a 4am set. Since it's less complex, you should be able to pull it off, blurry-eyed and all!

Well, that concludes my "Lessons for DJing in 2010" rant. Please take it with a grain of salt. I am in no way condoning a lazy set...but I AM saying to THINK more than you DO. Use your brain a little more, hit the audio books and learn a little more, and watch your crowd carefully. Give them something to groove on, and realize you don't have to USE everything JUST because you have it. Do what sounds right, and not what sounds complex. The girls will thank you with a few extra hip swings, and the guys with a few more heads-down, in-the-groove dance moves.

Peace!

FroBot

The Science of DJing - Music Chills and Pop Cycles

(Original Link - http://www.theblastbydigiwaxx.com/2010/11/16/the-science-of-djing-music-chills-and-pop-cycles/)

Ever wondered why you get chills when listening to music? Perhaps you have even suspected that cycles in pop music follow economic cycles. Well, writer Yale Fox has an entire blog dedicated to studying the "science of nightlife culture," called Darwin Vs The Machine, that has looked at both subjects. In today's article he goes into the chill theory, and why popular music may pick up in pace as the economy slows down.

Have you ever listened to a song that's given you shivers? That pleasant feeling of chills running up your spine is actually called a frisson. What is it about music that induces this feeling? I listened to a lecture by Dr. David Huron that discussed his theory behind it.

Biologically, chills are called piloerection. They are characterized by a pleasurable, cold sensation which sometimes produces a shudder. Chills are something we normally experience in response to certain stimuli. At the core, these chills are a result of surprise - the failure of the organism to predict its environment and what is going to happen next. The neurotransmitters released during this type of response are catecholamines: epinephrine (adrenaline) and dopamine. This brief and pleasurable scare is the same reason we enjoy rollercoasters and watch horror films.
Here are some other examples of when we experience these frissons.
  • Stepping into a warm bathtub
This is a classic example of the organism not being able to predict its environment. The body feels a sudden change in temperature and reacts by eliciting the fight-or-flight response.
  • Nails on a chalkboard, or a loud scream
It comes as a surprise again, and is usually a warning or a call for help from another member of our species. Whether we run to help or run for safety, it's an indication that something unexpected is occurring in the environment.

HOW CAN I WORK THAT INTO A SET?

A large part of the music we enjoy comes from the balance between predictability and unpredictability. This is probably a good way to think about track selection for your DJ sets: try to put yourself somewhere between the predictable and the unpredictable. Perhaps adding an interesting effect or a unique twist to a familiar track would be enough to induce that wonderful chill we associate with a great musical moment.

Personally, the only music that really gives me chills is lyrically based - more specifically, punchlines and complex verses. This still fits the theory, as these lines are usually witty and unexpected; there's no way of really predicting the verse before you hear it. The fact that it is a heightened emotional response means it likely becomes imprinted for future reference. And once I know the words to a song, I find I don't get chills when I hear it again.

It's virtually impossible to test this in a lab, since different people are surprised at different times. I think the best thing to do is put it up for open debate: readers, please post your comments and suggestions - or specific songs, and the points in them where you experienced chills.

DOW VS BPM - ARE THEY RELATED?

I took a database of every song that has charted on the Billboard Top 100 from 1955 through 2009. Songs were analyzed and sorted in terms of two important characteristics: (i) tempo and (ii) modality. Tempo is measured in beats per minute (BPM), and is the general speed of the song. Modality, or mode, refers to whether the song is in a major or minor key. Major keys sound happy and minor keys sound sad - even an untrained ear can easily detect this.
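The kind of aggregation described here - tempo and modality summarized per period - is easy to sketch. The rows below are invented for illustration; they are not data from the Billboard study, and the field names are my own.

```python
from collections import defaultdict

# Hypothetical chart rows: year, tempo in BPM, and mode (major/minor).
songs = [
    {"year": 1958, "bpm": 112, "mode": "major"},
    {"year": 1979, "bpm": 118, "mode": "major"},
    {"year": 1992, "bpm": 104, "mode": "minor"},
    {"year": 2008, "bpm": 126, "mode": "minor"},
    {"year": 2009, "bpm": 130, "mode": "minor"},
]

# Group songs by decade, then summarize each group.
by_decade = defaultdict(list)
for s in songs:
    by_decade[s["year"] // 10 * 10].append(s)

for decade in sorted(by_decade):
    group = by_decade[decade]
    mean_bpm = sum(s["bpm"] for s in group) / len(group)
    minor_share = sum(s["mode"] == "minor" for s in group) / len(group)
    print(decade, round(mean_bpm, 1), f"{minor_share:.0%} minor")
```

Lining these per-decade summaries up against an economic indicator like the Dow is then a straightforward join on the time axis.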

Making beautiful music can strike a sour note


(Original Link - http://news.health.ufl.edu/2010/14721/multimedia/health-in-a-heartbeat/making-beautiful-music-can-strike-a-sour-note/)


Professional musicians are accomplished artists at the top of their field. And although their job is glamorous, health practitioners are tuning in to the fact that it can be stressful, too.
Consider the orchestra. It’s not unusual for members to sometimes be gripped by stage fright, or worry about becoming disabled and unable to perform. Their work can be physically demanding, and requires high levels of stamina.

Job frustration, a workplace hazard shared by many of less lofty vocation, is another source of a veritable symphony of stress. One reason? Musicians must deal with the frustrating combination of being highly skilled and accomplished while often having little authority over what and how to play. This can take its toll in various ways, but musicians must find ways to cope so they can keep making beautiful music.

A pain in the neck… or the back or the shoulders… is one way stress can strike. But in a new Norwegian study, orchestral musicians did not have higher levels of those complaints than others. That might be because people whose pain is debilitating would resign from the orchestra.
Members were more likely to complain about gastrointestinal problems, mood changes and fatigue. And those complaints were linked to higher stress, as evidenced by high saliva levels of the hormone cortisol.

It turns out that even coping mechanisms are linked with stress levels. Musicians who dealt with work-related problems by seeking social support or distractions had higher stress levels than those who tackled problems directly and tried to look for solutions.

Tuning in to maintaining good mental and physical health is important for handling daily stresses. And it certainly is key for musicians and music students who want to keep the music playing.

The Science of Music - From Rock to Bach

(Original Link - http://www.newsobserver.com/2010/11/15/803788/the-science-of-music-from-rock.html)


What is a musical note? This is one of the deceptively simple questions asked and answered by John Powell in his fascinating book, "How Music Works."

It's an easy question, you might think. A musical note, as created by a musical instrument or a voice, is determined by the frequency of the sound waves produced. Wrong, that would be the note's pitch. Well, one can surely form a note by simultaneously depressing several related piano keys. Nope, that's not a note; that's a chord. A note, the basic building block of all music, is a repeating pattern of sound waves (which distinguishes it from the chaotic sound waves of nonmusical noises). It "consists," Powell says, "of four things: a loudness, a duration, a timbre and a pitch."
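Powell's four properties can be rendered quite literally as a tiny synthesizer: loudness is an amplitude, duration a sample count, pitch a fundamental frequency, and timbre (very crudely) the mix of harmonics stacked on that fundamental. The harmonic weights and file name below are arbitrary choices for the example, not anything from the book.

```python
import math
import struct
import wave

RATE = 44100  # samples per second

def note(pitch_hz, duration_s, loudness=0.5, timbre=(1.0, 0.5, 0.25)):
    """Build a repeating waveform from the four properties of a note."""
    samples = []
    for i in range(int(RATE * duration_s)):
        t = i / RATE
        # Sum harmonics; the relative weights are the note's (crude) timbre.
        s = sum(w * math.sin(2 * math.pi * pitch_hz * (h + 1) * t)
                for h, w in enumerate(timbre))
        samples.append(loudness * s / sum(timbre))  # normalize, set loudness
    return samples

a440 = note(440.0, 0.5)  # half a second of concert A

# Write it out as a 16-bit mono WAV so you can actually hear the note.
with wave.open("a440.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(RATE)
    f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in a440))
```

Changing only the `timbre` tuple while keeping pitch, loudness and duration fixed is a nice way to hear that timbre really is a separate property: the "same note" starts to sound like a different instrument.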

Starting with the four properties of a note, the author, who is both physicist and musician, uses easy-to-follow, conversational language to lead the reader into the science of music. He explains every common musical term, from "key" to "bar" to "scale." He differentiates a concerto from a sonata and shows how composers use chords to create harmonies. He brings his explanations to life with a wide range of examples. For instance, a chord played one note at a time, known as an arpeggio, is found in "Hotel California," by the Eagles, while a complex harmony called counterpoint was used by Bach in his concertos.

After explaining the meaning of musical terms, Powell interprets those strange-looking symbols found in a piece of sheet music. It is amazing that after a few hours of Powell's explanations, a musical novice like me can begin to read music. And for those who would like to use their newly acquired musical education to make their own music, Powell offers advice on how to choose an appropriate first instrument. Violins are too hard; pianos are easier.

For those who approach music more passively, Powell provides a chapter on how and where to listen to music. Instead of spending $75,000 on "a special listening room," he advises us to install our equipment in a normal room, then move the speakers around to get the best sound. He also answers a question that is being passionately debated by audiophiles all over the world: "Are vinyl records better than CDs?" The answer, he says, is no. Those favoring vinyl are victims of "technology nostalgia."
