soundsource: a few examples

A few weeks ago, I added links to some of my smaller OS X projects to my software page. One of these projects is a command-line version of Rogue Amoeba's now-discontinued SoundSource.

This tool, which I have rather unimaginatively named soundsource, is the basis for a number of scripts I have written. I recently enhanced some of the scripts and figured I might as well post them here as inspiration for others who are drowning in a sea of audio input and output devices connected to their Macs. I run these scripts from FastScripts with corresponding keyboard equivalents:

One script handles the various headphones I use. Macs of the last few years support the same microphones and remotes as Apple's iOS devices, and the generally decent microphone quality of many iOS-compatible headsets is even adequate for dictation in a quiet room. I'm dictating this blog post, for example, with a Bose QuietComfort 20i headset. The accuracy isn't quite that of my usual setup, but it is entirely sufficient for short-term use, and it sure is nice to have only one thin cable plugged into my Mac.

Sometimes, however, I just have regular headphones plugged into the jack, and in this case there is no corresponding microphone input. Apple’s recent Macs also do a great job of dynamically changing the available audio input and output sources advertised to the OS as you connect and disconnect devices.

And in yet other cases, I use a USB headset. In any event, I want a way to “just start playing (and recording, if possible) through my headphones”. Here it is, using Growl to display the results:

#!/bin/sh

notify() {
	/bin/echo -n "Input: $(/usr/local/bin/soundsource -i)" |
		/usr/local/bin/growlnotify \
			-I /System/Library/PreferencePanes/Sound.prefPane \
			-d net.sabi.soundsource $1
}

# succeeds if headphones connected to jack
if /usr/local/bin/soundsource -o 'Headphones'; then
	# succeeds if headphones have integrated microphone
	/usr/local/bin/soundsource -i 'External microphone' || /usr/bin/true
	notify Headphones
else
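	# the Plantronics headset shows up as a C-Media USB audio device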
	/usr/local/bin/soundsource -o 'C-Media USB Headphone Set'
	/usr/local/bin/soundsource -i 'C-Media USB Headphone Set'
	notify 'Plantronics Headset'
fi

The notification looks like this:

[screenshot: Growl notification]
Note that I take advantage of soundsource exiting with a failure status when it is unable to switch to the desired audio input or output device.
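
For example, the failure status makes fallbacks easy to chain. Here's a minimal sketch (the 'Internal Speakers' device name is hypothetical; check yours in the Sound preference pane):

# prefer the USB headset; fall back to the built-in output if it's absent
/usr/local/bin/soundsource -o 'C-Media USB Headphone Set' ||
	/usr/local/bin/soundsource -o 'Internal Speakers'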

The second script handles changing the output to my AirPort Express, Furrball. Unfortunately, my home Internet connection is currently via my landlord’s somewhat unreliable Wi-Fi, and the AirPort Express drops off the network with depressing regularity. FastScripts does a great job of displaying status when the script fails, but because of the frequency of this failure, I recover from it by power cycling the AirPort Express. Note that, at least in OS X 10.8, switching to a nonfunctional AirPlay device may appear to succeed but immediately switches back to another device; you need to confirm the change.

#!/bin/sh

notify() {
	/bin/echo -n "Input: $(/usr/local/bin/soundsource -i)" |
		/usr/local/bin/growlnotify \
		    -I /System/Library/PreferencePanes/Sound.prefPane \
		    -d net.sabi.soundsource $1
}

/usr/local/bin/soundsource -o Furrball || /usr/bin/true
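# AirPlay switching can appear to succeed yet silently revert, so confirm it stuck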
if [[ $(/usr/local/bin/soundsource -o) = Furrball ]]; then
	notify Furrball
else
	notify "Power cycling Furrball..."
	/usr/bin/osascript -e 'tell app "XTension" to turn off "Furrball"'
	/bin/sleep 1
	/usr/bin/osascript -e 'tell app "XTension" to turn on "Furrball"'
	count=0
	while true; do
		notify "Waiting for Furrball ($count)..."
		if /sbin/ping -qot 1 furrball.local; then
			/usr/local/bin/soundsource -o Furrball
			notify Furrball
			exit 0
		fi
		count=$((count+1))
	done
fi

Updating dynamic DNS on FreeBSD with ldns-update(1)

Because of the many UI/feature regressions in AirPort Utility 6 and lack of attention to AirPort Extreme firmware bugs (currently, my family’s one-generation-old AirPort Extreme has issues with dynamic DNS updating and drops SIP traffic), I’m in the process of migrating to an embedded router platform, PC Engines’ apu1c.

This migration has been a trying process, between buggy firmware, discovering too late that OpenBSD doesn’t support 802.11n, and FreeBSD PF bugs/lack of documentation (FreeBSD PF diverged significantly from OpenBSD PF a while back). I just ran into a fun problem where the wireless card comes up with a bizarre PCI ID that doesn’t configure:

none1@pci0:5:0:0:	class=0x020000 card=0x00000000 chip=0xff1c168c rev=0x01 hdr=0x00
    vendor     = 'Atheros Communications Inc.'
    device     = 'AR5008 Wireless Network Adapter'
    class      = network
    subclass   = ethernet

versus the correct information I got after a power cycle:

ath0@pci0:5:0:0:	class=0x028000 card=0x3099168c chip=0x002a168c rev=0x01 hdr=0x00
    vendor     = 'Atheros Communications Inc.'
    device     = 'AR928X Wireless Network Adapter (PCI-Express)'
    class      = network

Hopefully this is not indicative of the wireless card dying.

I’m still using ipfw/natd for the moment, though I will try again with PF at some point because I’m unaware of a way to make UPnP and NAT-PMP work otherwise.
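
For reference, the ipfw/natd combination needs only a few rc.conf knobs. A sketch, with a hypothetical re0 WAN interface:

# /etc/rc.conf fragment
firewall_enable="YES"
firewall_type="open"	# stock rc.firewall adds the natd divert rule for this type
natd_enable="YES"
natd_interface="re0"	# external (WAN) interface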

As I mentioned above, the AirPort Extreme implementation of RFC 2136 dynamic DNS has problems. In addition to the updates-every-few-seconds bug above, Apple's NTP servers were returning times far enough off from the correct time that the signature was failing. You can't even configure the NTP server in AirPort Utility 6, of course, but thankfully I was able to hack AirPort Utility 5.6.1 into working on current OS X versions.

FreeBSD 10 no longer ships BIND and its nsupdate utility. Instead it includes LDNS and Unbound, but not LDNS’s associated “example” utilities, notably the dynamic DNS updater ldns-update. So I installed the LDNS package and promptly discovered several bugs in ldns-update. Thanks to some generous help, I was able to get the FreeBSD-packaged version of ldns-update to work—with one exception: dynamic DNS updates being sent to port 5353 rather than 53.

Until the port change makes it into an LDNS release, here's a patched amd64 package built on FreeBSD 10.0.

Finally, here’s my /etc/dhclient-exit-hooks that updates the IP address on DHCP address changes:

#!/bin/sh -ef
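# exit unless DHCP just bound or renewed a lease; the leading "." guards against $reason being empty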

case .$reason in
    .BOUND | .REBOOT)
	;;
    *)
	exit 0
esac

HOST=hostname_goes_here
ZONE=zone.goes.here
KEY='KEY_GOES_HERE'

update() {
    # ldns-update domain [zone] ip tsig_name tsig_alg tsig_hmac
    # this script assumes the zone is the same as the tsig_name
    /usr/local/bin/ldns-update $HOST.$ZONE $ZONE $1 $ZONE hmac-md5 $KEY
}

update none
update $new_ip_address

Note that the usage message you get from running ldns-update is more useful than the man page: it includes the important none and zone options.
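
To make the argument order concrete, here's roughly how the script's two calls expand, with hypothetical host, zone and key, plus a query to confirm the result (drill(1) is LDNS's dig equivalent and is in the FreeBSD 10 base system):

# delete any existing A record for the host (the "none" pseudo-address)
/usr/local/bin/ldns-update host.example.org example.org none example.org hmac-md5 "$KEY"
# then add the new address
/usr/local/bin/ldns-update host.example.org example.org 198.51.100.7 example.org hmac-md5 "$KEY"
# confirm the update took
drill host.example.org @ns.example.org A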

Update, 6 December 2014: Fixes for the above issues (including an updated man page) have been integrated into LDNS, though not yet into a release. I rebuilt my FreeBSD package, also incorporating the FreeBSD port changes: here you go.

Creating a dictation buffer

As I mentioned in a post last month, I recently upgraded my Windows dictation setup to Dragon NaturallySpeaking (DNS) 12 and Word 2013.

This upgrade broke the Emacs dictation interface (vr-mode) I had earlier used with DNS 8 and 10. But it also encouraged me to explore new dictation workflows using Natlink directly from my own Python scripts.

Primarily, I have since switched from editing entire documents in Windows, or using the clipboard to transfer text, to using my minimal dictation surface as a buffer while editing documents on the Mac side. I was inspired to do this after I spent a day with a radiologist observing PowerScribe 360 in use.

PowerScribe is a dictation system which uses a dedicated handheld speech controller. Rather than being inserted at the insertion point like typed text, dictated text is buffered and “placed” by buttons on the speech controller or by clicking. You can also choose to discard the dictated text, accompanied by a cute sound effect. Color coding and other affordances distinguish templated from dictated and typed text. (This would be much easier to show than to describe, but I couldn’t find any good examples of the system actually in use on YouTube.)

Thanks to PowerScribe, I realized that it's actually easier for me to work with shorter fragments of text, a sentence or a paragraph at a time, rather than importing an entire document at once. What I've implemented so far is on GitHub; here's a video showing it in use and explaining some technical details:

There are some disadvantages with this system. If you do want to dictate individual words or something smaller than a sentence into the buffer, you will need to manage the spaces, capitalization and punctuation yourself, since your Word document in the dictation buffer isn’t aware of the surrounding contents. In reality, I seldom find this a problem; saying “no caps” or “lowercase that” from time to time isn’t overly arduous. I could theoretically go even further and implement the Mac side of the solution with an input method rather than services and scripts, which would give me access to the surrounding context, but I think that would be a lot of work for relatively little added benefit.

I’ve still got some more work to do; while writing this post, I realized I need a “discard” command much like the one in PowerScribe. (Done.)

While my setup isn’t yet to the point of being usable “out of the box”, I hope that this brief exploration will help other technically inclined dictation users expand their workflows.

Creating a minimal dictation surface with Word 2013

As I discussed in my previous post, I do my serious dictation in a Windows 7 virtual machine. Having recently upgraded my dictation setup and transferred it to a new Mac, I figured it'd be a good thing to share.

While I don't have any experience with its competitors, VMware Fusion 6 does a good job of making the USB headset plugged into my Mac available to Windows for dictation, without interfering with its use in OS X. Dragon NaturallySpeaking calls VMware's audio source "Microphone (Mic-In)." In earlier versions of VMware Fusion, I had mapped my USB headset exclusively to Windows for dictation. This also worked well, but I'm not sure it was actually necessary.

Most of the time I'm not actually editing documents directly on Windows; the OS simply holds my text on the way to its destination in a Mac application. Dragon NaturallySpeaking (and its Medical derivatives) includes a WordPad knockoff called DragonPad. Its stated purpose is to serve as exactly this sort of dictation intermediary, but its user interface looks like it was frozen around Windows 2000 and it only supports single undo (not even redo). So it's a bit of a nonstarter.

My next best bet is Microsoft Word, for which NaturallySpeaking includes a COM-based add-in. Previously, NaturallySpeaking 10 limited me to the 32-bit version of Office 2010; with version 12, I can use 64-bit Office 2013.

Happily, some of the Office 2013 changes made to support smaller-screened touchscreen tablets have helped my use case of a minimal "dictation surface" sitting in the corner of my Mac's screen. Here's how I set things up:

  • Use VMware Fusion's "Single Window" view, rather than its Unity view. The Word view options I describe only work if the Word window is maximized. In Unity view this means maximizing to the Mac's screen, whereas in Single Window view you can resize the VM screen as needed. It's nice to be able to see your other Mac apps as you dictate (or not; I use Shroud's keyboard shortcuts to hide everything but the VM window when I really need to focus).
  • Set the Windows taskbar to auto-hide.
  • Change Dragon NaturallySpeaking’s DragonBar mode (in Tools > Options > View) to “Tray Icon Only”. I find the continual display of audio levels distracting; Apple’s dictation display, despite doing something conceptually similar, is less distracting, probably because it doesn’t change color. I have a keystroke, Ctrl-Alt-`, set to toggle the microphone and don't find it a problem that I sometimes get extraneous dictation in my document if I forget that the VM is listening in the background. (Actually, sometimes it’s pretty humorous.)
  • Turn off Word’s Start screen so you get an untitled document when you start Word.
  • Switch Word to draft view, via Alt, W, E or clicking the Draft button in the View ribbon tab. Those of you who have been using Word since 4.0 or earlier may remember this used to be the default view, but it’s been marginalized in favor of more WYSIWYG alternatives in recent Word versions. However, this heritage explains the corresponding dictation command, “normal view”. (If you accidentally say “draft view”, you’ll probably find everything changes into Courier; say “turn off draft mode” to fix that.) The main advantage of draft view is that you can resize the window without changing your font size; it's also more space efficient, as it doesn’t display your margins.
  • Outline view (Alt, W, U, or say "outline view") is also a good choice for dictation, though Word is no OmniOutliner. Say "new line" rather than "new paragraph" or you'll dictate a bunch of empty headings. "Tab" and "press shift-tab" will indent and unindent, respectively.
  • Set Word's ribbon to auto-hide, via the button between the help and minimize buttons at the top right of the window. This maximizes the window if it isn’t already; it also hides the status bar, window title bar and most other chrome. A long button across the top of the window labeled with an ellipsis will restore the chrome if you need it, as will a press of the Alt key.
  • Experiment with Office themes (Options > General > Personalize your copy of Microsoft Office). The White theme is more trendy but I prefer a bit more separation between content and chrome, so I picked Dark Gray.
  • Consider disabling the cursor animations (smooth movement of the insertion point as you type) if they're as disconcerting to you as they are to me.

Once you have your view set up, you'll find that Word reverts to Print Layout view for new documents. Unfortunately, to solve this problem you must delve into the crufty world of Office automation with Visual Basic for Applications. From the look of its toolbars, the VBA editor last had serious work done in the Office 2003 timeframe; most of it appears unchanged since VBA’s inception.

If it isn't there already, add the Developer tab to your Ribbon (Options > Customize Ribbon > Main Tabs). Click Developer > Visual Basic, or press Alt, L, V. Select the Normal project, if it isn't selected already, so that your code goes in the template, and paste in the following (if you've already got code in there, I trust you know what to do):

Sub AutoExec()
    ' Wait until a document opens.
    Application.OnTime Now, "AutoNew"
End Sub

Sub AutoNew()
    ' Ensure that the draft font isn't used
    ' (e.g., if you say "draft view" by accident)
    With Dialogs(wdDialogToolsOptionsView)
        .DraftFont = False
        .Execute
    End With
    ' Draft view is wdNormalView.
    If ActiveWindow.View.Type = wdPrintView Then
        ActiveWindow.View.Type = wdNormalView
    End If
    ' If window isn't maximized, ribbon doesn't collapse fully.
    Application.CommandBars("Ribbon").Visible = False
End Sub

Update: I have posted an updated version of the above macros to GitHub.

(There's some incorrect information on the Internet about scripting draft view in Word, for example here. The issue, as above, is that Draft view used to be Normal view, and still is Normal view both from VBA as well as in Dragon voice commands. View.Draft, despite the name, controls the font.)

Save, quit and restart Word; you should find yourself with a minimal dictation surface ready for your use:

[screenshot: Word 2013 dictation setup]

A related note: I experimented with Windows Live Writer for this post, versus my usual process of copying and pasting into MarsEdit. As long as I turn off the "Blog Theme" button (which causes problems unrelated to dictation), dictation into Windows Live Writer works acceptably. The biggest issue is the markup ending up all on one line in WordPress, despite looking fine (seriously, a Microsoft tool that generates tolerable markup!) in Windows Live Writer. Smaller issues include the results box appearing in the top left corner of the screen regardless of my cursor location (normally it appears at the insertion point) and dictation inserting unnecessary newlines, particularly in bulleted lists.

OS X dictation alternatives

I dictate to my computer a lot. It helps me write faster and saves my hands for other pursuits.

In the last few years, dictation, both of the local and network-hosted variety, has improved to the point that this choice is no longer an infuriating time sink. Coding, versus writing prose, via dictation is still in its infancy and I continue to anxiously await Tavis Rudd’s release of the dictation system he demoed at several conferences last year (warning: videos may be NSFW thanks to some synthesized expletives).

In a conversation on Twitter earlier this week I noted that, despite considerable enhancements in the past few years, dictation on OS X doesn’t get discussed much — hence this post.

If you’re going to be playing with dictation, make sure you have a decent headset, properly positioned. Wired, noise-canceling USB headsets are not expensive, and even though Apple’s been adding microphones and improving noise canceling on their Macs recently, you still do better with a headset. If the dictation system you’re using doesn’t include an audio setup step, just record and play back some of your own speech to make sure it’s audible and relatively free of background noise.
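
A quick command-line check, if you'd rather not fire up QuickTime Player; a sketch assuming SoX is installed (e.g., via Homebrew or MacPorts):

# record five seconds from the default input, then play it back
rec /tmp/miccheck.wav trim 0 5
play /tmp/miccheck.wav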

On OS X, you have four choices for dictation:

Networked dictation

Networked dictation was introduced in OS X 10.8 Mountain Lion. It's similar to the dictation service on iOS, and benefits from its use by Siri. I appreciate its well-executed incorporation into the OS: you can dictate effectively into nearly every text field; you can easily start and stop dictation from the keyboard; dictation alternatives (blue dotted underline) are part of the Cocoa text system, and dictated text nicely integrates its capitalization and sentence structure with the surrounding material. The software on the other end of the network has a huge vocabulary, including medical terms.

Usability disadvantages with this method of dictation as currently implemented include:

  1. no trainability (though, given it’s designed to be a speaker-independent system, this is less of an issue)
  2. no real-time feedback: dictation happens in one-minute batches
  3. no editing by voice
  4. no error handling whatsoever. If the server fails to respond or recognize your words, up to a minute of spoken text is lost. This is somewhat understandable on iOS, but given the essentially infinite resources of OS X in comparison, it’s not defensible there. Ideally, I’d expect audio to be saved as a text attachment for deferred recognition, much like the Newton did with ink text.

There are also privacy issues, of course. I’m careful not to use this service to dictate anything sensitive, regardless of the promised or actual handling of my data.

“Enhanced Dictation”

OS X Mavericks (10.9) introduces “Enhanced Dictation”, a locally hosted version of Nuance’s recognizer. It’s off by default and not even downloaded until you turn it on in System Preferences. Like OS X’s networked dictation, Enhanced Dictation is not trainable and doesn’t let you edit by voice, but it does let you mix keyboard/mouse editing and dictation. While it provides the real-time feedback expected of a local recognizer and does away with the one-minute dictation limit, it’s the only one of these options I find unusable in practice.

Enhanced Dictation’s omissions of training and editing likely protect sales of the Dragon Mac products (discussed below). The bigger issue is that this seems fundamentally a speaker-dependent system without a method of training, resulting in frequent dictation errors you can’t fix. The vocabulary seems smaller than the networked alternative, though because of its frustratingly high error rate, I haven’t done a lot of testing. It also uses a lot of memory.

Dragon products

Nuance offers Dragon Dictate for Mac, MacSpeech Scribe and Dragon Dictate Medical. The Mac-specific components of these products and their predecessors have always been buggy and flaky. My experience with the support and sales surrounding them has ranged from incompetence to sleaziness. I have purchased several versions and upgrades of these products going back to the original pre-OS X, Philips recognizer-based versions, but I’m not going to keep supporting software that is this poorly developed, sold and supported.

Windows in a virtual machine

Nuance’s Windows dictation products (Dragon NaturallySpeaking and Medical/Legal) are better than their Mac equivalents, though that’s not saying a lot. The UI is a scattered, slowly-evolving mess; true interaction between keyboard/mouse and voice editing is limited to individual versions of specific applications, and the medical product is expensive (upgrades are $500 on sale).

The main reason I dictate into Windows is the ecosystem surrounding the Dragon products there. There are quite a few abandoned research projects and other near-abandonware to contend with, but it’s possible with some effort to construct a productive system. What I’ve done thus far is nowhere near what Tavis Rudd did, but it works for me. Natlink is a Python framework for building recognition systems, with several macro languages/frameworks built on top including Unimacro, Vocola and Dragonfly (the basis of Tavis’s system).

Microsoft also bundles speech recognition with Windows these days; I’ve used it very little, but it does work with Dragonfly.

My choices

I use OS X’s networked dictation for brief passages, and a Windows 7 VM for anything longer, like this post. I recently upgraded my Windows environment to the current Dragon Medical 2 (equivalent to NaturallySpeaking 12) and Word 2013. More on that setup is coming in my next post.
