
The Dash: disconnecting from the Dash side

Back in April, I noted “You can't initiate a disconnection or pairing from the right Dash once it's connected.” This is still technically true in the current firmware (1.5.1) — but I just discovered a convenient workaround, at least on iOS. Typically, I find this an issue when the Dash is connected to a device across the room (or inside my bag when I'm on my bike) and I want to pair it with something else closer to me.

Here’s how to do it:

  • Tap and hold on the right Dash until you hear the tone.
  • Wait another second or so until you hear the “Siri is listening” tone.
  • Say “turn off Bluetooth.” The iOS device does just this, severing its connection to your Dash.
  • You can then connect/pair the right Dash to another device.

The Dash firmware 2.0 is now in private beta testing. Unfortunately I didn't respond quickly enough to the call for testers on Facebook to get in the pool. The advertised list of upcoming features is pretty enticing:

  • Enhancements to activity tracking, especially for swimming and cycling
  • Changes to the feedback of metrics during activities; metrics are also logged in the Bragi App
  • Calibration of The Dash sensors to improve accuracy
  • Major enhancement to the speech quality during phone calls
  • Changes to audio playback to improve clarity and quality, as well as significantly boosting the maximum volume level
  • Improvement to the Bluetooth & BLE connectivity with other devices and apps, as well as implementing security during BT pairing and bonding to ensure data privacy
  • Implement more remote data channels with the Bragi app

The Dash is getting a lot of competitors in the cord-free Bluetooth headset market. I hope Bragi is able to keep up and realize more of their vision, while fixing practical issues such as those related to pairing and Bluetooth range.

Dragon NaturallySpeaking roaming user profiles with Apache

Some editions of Dragon NaturallySpeaking (including Medical) support a Roaming User Profile feature. With this, you can store your voice profile on a server and download it to/upload it from computers on which you dictate. Like most aspects of Dragon NaturallySpeaking, it’s unnecessarily complex and flaky, but I got it to work in my distinctly non-enterprise environment a few weeks ago. For anyone else in a similar situation who wants their training, custom dictionaries and commands to follow them, I hope the following is helpful.

I assume here you have an existing local user profile to migrate. Dragon NaturallySpeaking’s WebDAV client is inefficient and includes many configuration options of dubious utility, but does (eventually) work. For WebDAV on IIS (or SMB), the instructions in the administration manual appear relatively complete. The manual mentions Apache compatibility but includes no setup information, nor could I find any elsewhere on the Internet. So, my server examples use WebDAV with the Apache HTTP server 2.4.x.

Setting up a WebDAV server

It's 2016 and you should be using SSL/TLS by now. Mozilla has a nice SSL configuration generator; this is the configuration I'm using. The newest protocol Dragon NaturallySpeaking 12 claims it supports is TLSv1, so the "modern" configuration likely won't work.
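
Before involving Dragon at all, you can confirm your server will still negotiate TLSv1 with openssl's built-in client; the hostname below is a placeholder for yours:

# Force a TLSv1-only handshake; if this fails, Dragon NaturallySpeaking's client likely can't connect either.
% openssl s_client -connect your.server:443 -tls1 < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'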

My configuration follows. Authentication is however you want to set it up; I use digest auth behind SSL/TLS. Obviously, replace my file paths as appropriate. The Dragon NaturallySpeaking WebDAV client configuration includes options to follow redirects, but they don't work properly and aren't compatible with connection keep-alive. Thankfully, Apache has a workaround for such brokenness (redirect-carefully). The client expects infinite-depth requests to work, hence DavDepthInfinity on.

DavLockDB /var/www/sabi.net/webdav/dav_lock.db
<Directory /var/www/sabi.net/public/dragon>
        Dav On
        DavDepthInfinity on
        AuthType Digest
        AuthName dragon
        AuthUserFile /var/www/sabi.net/etc/digest.passwd
        Require valid-user
        SSLRequireSSL
        # Redirects don't work. At all.                                         
        BrowserMatch "Nuance component" redirect-carefully
        RewriteEngine off
</Directory>

Make sure the directory is writable by the Web server user; mine looks like this:

drwxrwsr-x 4 nriley www-nriley 4.0K Apr 23 11:49 /var/www/sabi.net/public/dragon/
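
If you're setting things up from scratch, something like the following should do it; the user, group, paths and digest realm here match my configuration above, so substitute your own (the realm passed to htdigest must match AuthName, and the web server gets write access via the group):

% sudo mkdir -p /var/www/sabi.net/public/dragon
% sudo chown nriley:www-nriley /var/www/sabi.net/public/dragon
% sudo chmod 2775 /var/www/sabi.net/public/dragon    # group-writable, setgid so new files keep the group
% sudo htdigest -c /var/www/sabi.net/etc/digest.passwd dragon nriley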

Setting up the WebDAV client

Documentation is here. Follow the instructions under Enable the Roaming User Profile feature and Set location of Master Roaming User Profiles.

In HTTP Settings, specify your username, password and an Authentication Type as appropriate. Under Connection, click Never for Follow Redirects and check the Keep Connection Alive box. I didn't change the Timeouts from the defaults.

My SSL Settings are as follows:

[Screenshot: SSL Settings]

I haven't actually tested if my server certificate is verified, but I do know enough not to check Using OpenSSL in an application that hasn't been updated in years.

Click Test Connection. If it fails, check your Apache logs; client-side feedback ranges from unhelpful to misleading. You'll notice that every single request is initially tried unauthenticated — I couldn't figure out a way to stop this from happening. Once I was confident that authentication was working, I filtered out these duplicate requests. Here’s the whole test:

% tail -fn 0 /var/www/sabi.net/logs/ssl.*~*.gz | grep nriley
nriley [23/Apr/2016:19:18:15 +0000] "PROPFIND /dragon HTTP/1.1" 207 1210 "-" "Nuance component"
nriley [23/Apr/2016:19:18:15 +0000] "DELETE /dragon/tst.tmp HTTP/1.1" 404 522 "-" "Nuance component"
nriley [23/Apr/2016:19:18:15 +0000] "PUT /dragon/tst.tmp HTTP/1.1" 201 442 "-" "Nuance component"
nriley [23/Apr/2016:19:18:15 +0000] "DELETE /dragon/TempDir HTTP/1.1" 404 522 "-" "Nuance component"
nriley [23/Apr/2016:19:18:16 +0000] "MKCOL /dragon/TempDir HTTP/1.1" 201 442 "-" "Nuance component"
nriley [23/Apr/2016:19:18:16 +0000] "DELETE /dragon/TempDir/tst1.tmp HTTP/1.1" 404 522 "-" "Nuance component"
nriley [23/Apr/2016:19:18:16 +0000] "PROPFIND /dragon HTTP/1.1" 207 6554 "-" "Nuance component"
nriley [23/Apr/2016:19:18:16 +0000] "PUT /dragon/TempDir/tst1.tmp HTTP/1.1" 201 458 "-" "Nuance component"
nriley [23/Apr/2016:19:18:16 +0000] "DELETE /dragon/TempDir/tst2.tmp HTTP/1.1" 404 522 "-" "Nuance component"
nriley [23/Apr/2016:19:18:16 +0000] "PROPFIND /dragon HTTP/1.1" 207 6554 "-" "Nuance component"
nriley [23/Apr/2016:19:18:16 +0000] "PUT /dragon/TempDir/tst2.tmp HTTP/1.1" 201 458 "-" "Nuance component"
nriley [23/Apr/2016:19:18:17 +0000] "PROPFIND /dragon/TempDir HTTP/1.1" 207 2858 "-" "Nuance component"
nriley [23/Apr/2016:19:18:17 +0000] "GET /dragon/TempDir/tst1.tmp HTTP/1.1" 200 341 "-" "Nuance component"
nriley [23/Apr/2016:19:18:17 +0000] "PROPFIND /dragon/TempDir/ HTTP/1.1" 207 1162 "-" "Nuance component"
nriley [23/Apr/2016:19:18:17 +0000] "MOVE /dragon/TempDir/tst1.tmp HTTP/1.1" 201 458 "-" "Nuance component"
nriley [23/Apr/2016:19:18:17 +0000] "MOVE /dragon/TempDir/ HTTP/1.1" 201 442 "-" "Nuance component"
nriley [23/Apr/2016:19:18:17 +0000] "COPY /dragon/newTempDir HTTP/1.1" 201 442 "-" "Nuance component"
nriley [23/Apr/2016:19:18:18 +0000] "DELETE /dragon/tst.tmp HTTP/1.1" 204 261 "-" "Nuance component"
nriley [23/Apr/2016:19:18:18 +0000] "DELETE /dragon/newTempDir HTTP/1.1" 204 293 "-" "Nuance component"
nriley [23/Apr/2016:19:18:18 +0000] "DELETE /dragon/newTempDir2 HTTP/1.1" 204 293 "-" "Nuance component"
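
To take Dragon's temperamental client out of the equation while debugging, curl can exercise the same WebDAV machinery; this is just a sanity check I find handy (URL is a placeholder for wherever your dragon directory lives):

# A digest-authenticated PROPFIND should come back as 207 Multi-Status.
% curl -i --digest -u nriley -X PROPFIND -H "Depth: 1" https://your.server/dragon/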

Roaming options

Nuance documentation is here and does a reasonably good job of explaining the options; I recommend you read it prior to my comments below. Here's how I have the roaming Administrative Settings configured:
[Screenshot: Administrative Settings]

If you’re going to be the only user, check Display Classic Open User Profiles dialog. This displays a flat versus a hierarchical list of users and dictation sources. Every time you click on anything in this dialog, be prepared for a long synchronous wait for server access. By disabling the hierarchy, you eliminate the wait while expanding your user. (If you only have one user and dictation source, you may not see this dialog at all.)

Allow non-Roaming User Profiles to be opened will need to be checked while you are migrating your user profile to a roaming profile, but can be unchecked afterward.

Merge contents of vocdelta.dat into network User Profile when file is full involves a 500K file; in a WAN environment with reasonably fast links, latency is likely to outweigh any time savings, so I kept this checked.

I unchecked Access network at User Profile open/close only because I keep my profiles open for days at a time and have an Internet connection available at all times. If your usage pattern is different, you may select otherwise.

Despite documentation suggesting that Ask before breaking locks on network User Profiles does not apply to profiles accessed through HTTP, I was asked to break a lock nearly every time I opened my profile until I unchecked it. There might be some server configuration that will let this be checked, but I’m unaware of it.

Always copy acoustic information to network and Conserve archive size on network are somewhat related. How you decide to limit/copy acoustic information really depends on your network performance, patience and desired strategy for propagating corrections and optimizing your profile.

Converting your profile

Again, there's official documentation which I won't repeat. There's no progress bar, just an unresponsive interface during migration; watch the server logs or your favorite network monitoring utility if you get nervous.

If you’ve been using Dragon NaturallySpeaking for some time, you may think of your profile as a large, unwieldy multi-gigabyte entity. Much of this is backups and audio data that aren’t strictly necessary — and you’ll notice that the server profile is much smaller because it omits them. My local profiles (compressed!) on two machines prior to migration were 1.4 and 1.1 GB; corresponding sizes on the server are 437 and 430 MB. ~320 MB of each is (primarily) audio in the voice_container subdirectory.
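
(If you're curious where the space goes on the server, du breaks it down nicely; the path is from my configuration above.)

% du -h -d 2 /var/www/sabi.net/public/dragon | sort -h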

Once you're comfortable your roaming profile works, don't forget to delete your local profile(s).

Pitfalls

Much of the information here is out of date, but one sentence remains important and relevant: "When using a roaming user profile, backup files cannot be generated in any location". The downside of backups not being written to the roaming profile is that if your profile becomes corrupted, you'll have to rely on your server backups. (This just happened to me today: I set up Dragon Medical Practice Edition on a new Windows 10 installation, and DMPE subsequently crashed every time I opened the profile from my Windows 7 VMs.) If you don't have server backups, go fix that.

The Language and Acoustic Optimizers don't run on a roaming profile; the idea is that you run them server-side. I plan to see how well they work on a fast network by remotely mounting the WebDAV share, but haven't had a chance to do this yet.
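
When I do, mounting the share on the OS X side and sharing it into the VM seems like the simplest route; here's a sketch I haven't actually tried against my own server yet (URL is a placeholder):

% sudo mkdir -p /Volumes/dragon
# -i prompts interactively for the WebDAV credentials
% sudo mount_webdav -i https://your.server/dragon /Volumes/dragon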

Dragon NaturallySpeaking startup and shutdown obviously take longer when the network is involved. You can automate opening a profile with a command-line argument to natspeak.exe, but you can't specify a dictation source (if you have more than one) without relying on AutoHotKey or similar. Thanks to various VMware Fusion and/or OS X bugs I already have to babysit dictation startup, so one more click to select a profile hasn't been a great additional hardship.

For more

My other dictation-related blog posts are in the Dictation category, if you're interested. Right now all my dictation effort is targeted at prose, but at some point I plan to investigate VoiceCode — which is currently in the process of being rewritten.

The Dash, heart rate sharing and naming: 1.4 update

I have been successfully using The Dash on a regular basis for iOS heart rate monitoring since my post a few months ago. The Dash software 1.4 came out last week and fixes many of the issues I addressed, as well as some unrelated ones I didn't mention. So, here’s an update.

1.4 adds support for broadcasting heart rate to more apps, including Wahoo Fitness, which was confused by 1.3 but works great now.

The audible message when you tap and hold the left Dash now makes it clearer that this gesture is not only for pairing but also for starting Bluetooth LE communication, including heart rate tracking.

There’s now audible feedback when connecting from the right Dash alone. Normally, the Dash connects automatically once both earpieces are in your ears, but you can use just the right Dash by double-tapping it once it's in your ear.

It's now possible to rename your Dash. Unfortunately the left and right sides get the same name, so it remains difficult to distinguish between them. The app implementation is a bit flaky. Sometimes the UI doesn’t immediately reflect name changes. The iOS UI doesn't let you edit the text field directly by tapping on the name, but keyboard navigation via Force Touch (iPhone) or 2-finger touch (iPad) works.

It turns out only the Bluetooth LE side will update the name of an already-paired device. You can use this “feature” to create a different name for each side:

  • Unpair both Dash from your phone.
  • Put both Dash in your ears.
  • Open the Bragi app.
  • Tap the left Dash and pair it.
  • In Settings, rename it to “XXX Dash R” or similar.
  • Eject the Dash from the app. Make sure the new name shows up on the “Select Device” screen (it probably won’t); if it doesn't, try connecting and ejecting again.
  • Pair the right Dash to your iPhone. If it still shows the old name, put the Dash back in the charging case briefly then try again.
  • Open the Bragi app.
  • Connect the left Dash, go into Settings, rename it to “XXX Dash L” or similar.
  • Eject the Dash from the app. Make sure the new name shows up on the “Select Device” screen; if it doesn't, try connecting and ejecting again; if it still doesn't work, connect, rename again and eject.

The result is just what you’d expect:
[Screenshot: iOS Bluetooth settings listing the two Dash with distinct names]

1.4 is the first version in which microphone recording quality is adequate for Siri and dictation, a huge improvement. (Listen to a comparison another user recorded; I've heard 1.3 sound much worse than that.) Sporadic disconnections in the presence of seemingly adequate signal strength once again seem dramatically reduced; I've not had a single disconnection since upgrading.

My biggest remaining Dash annoyances have to do with multiple audio devices. No version has had multipoint support, nor has this ever been promised, but worse is that you can’t initiate a disconnection or pairing from the right Dash once it's connected (however…)

A final tip, if you're still having issues with touch gesture reliability: Use the pad of your finger (like you would with Touch ID), not your fingertip as you would with a touch screen. With this change, my taps and swipes have improved from perhaps a 50% to 90% success rate.

The Dash and iOS heart rate sharing via HealthKit

The Dash is a Kickstarter-backed Bluetooth headset (though it has since raised many times its crowdfunding total) with no wires between the earpieces. Features include a heart rate monitor, accelerometers and a built-in music player. It’s waterproof and will eventually be programmable. Given my prior interest in Bluetooth headsets you might guess I was a backer, and you’d be correct.

I do have the headset in hand and have been using it for about a week now, but it’s clearly not done yet. So despite its 1.3-as-of-today release (details: check out the known issues list), and as we’re still in the Kickstarter fulfillment period and not even into pre-orders yet, I don’t feel comfortable giving the headset a full review. The company, and particularly its CEO Nikolaj Hviid, have gone above and beyond in transparent communication throughout its gestation and now into fulfillment. That is in no way necessary or sufficient to produce a good product, but it does cause me to give them a lot more slack than I otherwise would.

In the meantime, for others with The Dash in its current state, here’s a tip.

The left Dash is the one with the heart rate sensor in it. The instructions when you tap and hold for a few seconds on the left Dash tell you to pair it with the Bragi app, but it’s also possible to pair it with iOS directly and then share the resultant heart rate with apps via HealthKit. Theoretically it should also pair with any app that is expecting a Bluetooth LE heart rate sensor (and I imagine this will eventually be true in practice as well), but I’ve not had any luck doing this. For example, it shows up in Wahoo Fitness but just sits there connecting forever:

[Screenshot: Wahoo Fitness stuck connecting to The Dash]

While it might make more sense to add the device from Health > Sources, you actually pair it with iOS in Settings > Bluetooth. Note that both the left and right Dash show up as “The Dash” in the Bluetooth settings list. You can’t distinguish them when disconnected, but the left Dash doesn’t have an “i” button to the right of it when it’s connected.

Once you’ve paired your left Dash, you should see it in Health > Sources:

[Screenshot: The Dash listed in Health > Sources]

I didn’t have to explicitly turn on sharing my heart rate, but if for some reason you want to turn it off, you can by tapping on The Dash:

[Screenshot: The Dash's heart rate sharing setting in Health]

Finally, you need an app that can read your heart rate from HealthKit. I use Cyclemeter, which with the Elite upgrade has this feature:

[Screenshot: Cyclemeter's heart rate from HealthKit option]

You can see how this doesn’t preclude using another Bluetooth LE device, in my case a speed/cadence sensor.

You don’t need to start an activity on the left Dash to start sending the heart rate to your iPhone. Instead, once you’ve paired it once in Settings, tap and hold the left Dash for a few seconds until you hear a repeating tone, and it should connect to your iPhone. If you don’t enjoy the sound feedback the 30th time, just tap the left Dash once to stop it — it’ll still pair to your iPhone.

Note that you can have the right Dash paired to another device (in my case, the iPad on which I’m watching a video) and the left Dash will still act as a left earphone; the two sides communicate over NFMI rather than Bluetooth.

Finally, after adding heart rate info to my indoor cycling page, I get:

[Screenshot: indoor cycling page with heart rate data]

Yes, I’m rather out of shape — working on that.

SGI screen fonts converted for OS X

The first Unix machines I sat at on a daily basis were the SGI Indys in the Berry Patch. IRIX’s proportional system font was Adobe Helvetica Oblique, which was nice; but its monospaced font was unique and home-grown, simply called screen. Here are a couple of screenshots I took in that environment nearly 20 years ago.

Screen remains the most readable monospaced bitmapped font I've ever used. It’s available in regular and bold weights, and a wide range of sizes: 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 and 18 point. While I mostly use it in the 11 point size, the smaller sizes are terrific for fitting a bunch of log output in the corner of your screen.

After switching my desktop from Mac OS/Linux to Mac OS X in 2001, I initially used Monaco in both aliased and antialiased variants, but missed screen. I continued using screen in X11, running applications on the SGI O2 I then had on my desk, displaying remotely on my Power Mac G4.

In 2003 I used PfaEdit, now FontForge, to convert screen to a TrueType font so it’d work on OS X, and I have used it as my standard bitmapped font since. I would have made the conversions public earlier, but I was concerned about whether this would be a licensing violation. It turns out the SGI fonts were released under a MIT license a few months after I initially converted them back in 2003, but I didn’t notice until today. So, here are the fonts for you to download:

You may notice that these fonts look awful — with inconsistent horizontal and sometimes vertical spacing, even clipping — whenever you try to use them. Recent versions of OS X have been less kind to bitmapped fonts; here are some tips.

In Terminal, you can compensate for the font being squashed horizontally by adjusting the Character Spacing:

[Screenshot: Terminal character spacing setting]

The result:

[Screenshot: Terminal using the Screen font]

In the Emacs Mac port, you can disable antialiasing and ensure screen font metrics are used on a per-font basis. Here’s how I use Screen 11 if it’s installed, otherwise Menlo 12.

(cond ((eq window-system 'mac)
       (cond ((x-list-fonts "Screen")
	      (create-fontset-from-ascii-font "Screen-11" nil "mac")
	      (set-fontset-font "fontset-mac" 'latin
				"Screen-11:antialias=off:destination=1"))
	     (t
	      (create-fontset-from-ascii-font "Menlo-12" nil "mac")
	      (set-fontset-font "fontset-mac" 'latin
				"Menlo-12")))
       (setq default-frame-alist '((font . "fontset-mac")
				   (width . 80) (height . 80)
				   (background-color . "ghostwhite")))
       (setq-default line-spacing 1) ; extra spacing
[...]

What you get:

[Screenshot: Emacs using the Screen font]

In 2008 I built a demo app to demonstrate the various issues OS X had rendering this font, but I never actually filed any bugs. As long as I’m sharing the fonts I might as well share the app (source, binary). It uses a boatload of deprecated/removed API like QuickDraw and ATSUI, mostly to demonstrate how newer font APIs, such as the then-new CoreText, are worse at displaying bitmapped fonts than their older counterparts. You can click the checkboxes at right to see options you can use with the various APIs to try to fix the spacing:

[Screenshot: the text rendering test app]

Most Cocoa apps used to display the font without difficulty, but this changed in OS X 10.8 and later, which no longer perform screen font substitution by default. You can fix the font’s display by forcing the old behavior with NSFontDefaultScreenFontSubstitutionEnabled or NSLayoutManager.usesScreenFonts (which is deprecated in 10.11). These are discussed in the AppKit release notes (there’s no direct link but if you scroll up a little from the linked section you'll see it).
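
For a quick experiment with an app you can't rebuild, the default can be set per application from the command line (the bundle identifier here is a placeholder):

% defaults write com.example.SomeApp NSFontDefaultScreenFontSubstitutionEnabled -bool YES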

Bitmapped fonts are much less useful on a Retina display. A 5K iMac or equivalent is likely in my future when I replace my Mac mini, but not for a year or two as I just bought its current display this year. In any case, I may be posting this just as it’s about to become obsolete. Better late than never?
