When I first started working on the Where’s it Up API, I struggled with its pricing model. What should I charge? How should I organize customers? How much should I charge each group? My friend Sean asked me if the tests people were running cost me anything. I replied that they didn’t; they were merely a rounding error in my network traffic. He suggested a flat-rate-per-month model. I made up a few plans at reasonable price points in the $20-$60/month range, and launched. We’ve enjoyed a reasonable amount of revenue using this model.
Quite a bit has changed since then:
- Our network is larger
- The backend code has been completely refactored (moving from thousands of requests per day to millions will do that to you)
- We’ve managed to acquire a bunch of great customers.
Two other major adjustments have changed how I look at Where’s it Up:
- We have customers running more tests, faster, than I ever imagined. Where’s it Up users now create enough network traffic to represent a real number of dollars.
- We’ve expanded the number and type of tests we’re running, such that some are orders of magnitude more bandwidth- and CPU-intensive than anything we considered at launch. Compare: generating a screenshot of a web page, compressing it, and uploading it to AWS S3 (our shot test, powering ShotSherpa), to a single DNS lookup.
I tried to make the old pricing model work. We tiered some of our job types, limiting them to customers on even more expensive plans. Then we capped our plans at a total number of tests per month. Ultimately, the flat-rate model had two problems: customers couldn’t use some of the test types unless they were giving us $200/month, and running a million screenshots from our expensive Alaska server would cost us more than $200.
Our new pricing model is to sell credits, and charge a different number of credits for different test types. Running a quick DNS costs one credit, whereas a screenshot costs ten. This model allows us to offer every test type to every user, and to bill users more accurately for their usage.
Rather than giving every user one free month when they sign up, we’ll give them 10,000 free credits (at least until my accountant finds out). This is enough to:
- Confirm that one HTTP endpoint is accessible 10,000 times; or
- Confirm that one HTTP endpoint is accessible on every continent, every hour, for 35 days; or
- Check DNS results for your domain from 70 countries, daily, for the next 142 days; or
- Take a screenshot of your website every morning for the next 2.74 years
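The arithmetic behind that list is simple division; a quick sketch (the credit costs match the examples above, with an HTTP check assumed to cost one credit):

```php
<?php
// Credit costs per test type, as described above.
$creditCost = ['http' => 1, 'dns' => 1, 'screenshot' => 10];
$freeCredits = 10000;

// Daily DNS checks from 70 countries cost 70 credits per day.
$dnsDays = intdiv($freeCredits, 70 * $creditCost['dns']);

// One screenshot every morning costs 10 credits per day.
$screenshotDays  = intdiv($freeCredits, $creditCost['screenshot']);
$screenshotYears = round($screenshotDays / 365, 2);

echo "$dnsDays days of DNS, $screenshotYears years of screenshots\n";
```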
It’s too soon to call this a success, but I’m happier with the credits model than with anything else we tried or considered. It allows every customer to execute every job type, while ensuring we don’t sell more than we can support.
We just ran into an issue joining a new server to the MongoDB replica set powering Where’s it Up & ShotSherpa. We copied the data over to give it a good starting point, then added it to the replica set. We waited a while and it still hadn’t joined; the tail of the log showed many connection lines, nothing telling. When I restarted MongoDB I saw: `replSet info self not present in the repl set configuration`
I pinged the server’s hostname to confirm it was reachable. No problem. I connected to the server using the mongo command line on the primary server (`mongo server.example.com:27017`). No problem.
Eventually I copied the mongo hostname from the replica set configuration, and tried to use it to connect to mongo on the new secondary. No dice! As it turns out, the NAT hadn’t been configured to allow that hostname to work locally. A few new entries in our /etc/hosts file later, and our server joined successfully.
When MongoDB has trouble connecting, try the connection strings listed in the replica set config from all relevant hosts.
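In our case the fix was a couple of /etc/hosts entries on the new secondary, so the hostnames from the replica set config resolved correctly. The hostnames and addresses below are illustrative, not our real ones:

```
# /etc/hosts on the new secondary -- illustrative values
10.0.0.11   mongo1.example.com
10.0.0.12   mongo2.example.com
```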
I’ve made plenty of mistakes in the code powering WonderProxy, perhaps most famously equating 55 with ∞ for our higher-level accounts (issues with unsigned tinyint playing a close second). Something I think I got right, though, was the concept of “managed accounts”. It’s a simple boolean flag on contracts; when it’s set, the regular account de-activation code is skipped.
Having this flag allows us to handle a few things gracefully:
- Large value contracts
By marking them as managed, they don’t expire just because someone was on vacation when the renewal date passed. The customer stays happy, the revenue continues, and the expiry date remains accurate.
- Contracts with tricky billing processes
The majority of our contracts pay us with PayPal or Stripe. A selection of contracts, however, have complex hoops involving anti-bribery policies, supplier agreements, etc. The managed flag gives us time to get those ironed out.
- Contracts where we’ve failed to bill well
We occasionally make mistakes when billing our clients. When we’ve screwed up in the past, this helps ensure that there’s time for everything to resolve amicably.
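In code, the flag is about as dull as it sounds, which is rather the point. A minimal sketch of the idea (field and function names here are illustrative, not our actual schema):

```php
<?php
// Hypothetical nightly expiry pass: managed contracts are never
// auto-deactivated, they're only reported on.
function contractsToExpire(array $contracts, int $now): array
{
    $expired = [];
    foreach ($contracts as $contract) {
        if ($contract['managed']) {
            continue; // skip de-activation, leave it for a human
        }
        if ($contract['expiry'] < $now) {
            $expired[] = $contract['id'];
        }
    }
    return $expired;
}
```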
This doesn’t mean the system has been without flaw. We now get daily emails reporting on new signups, expired accounts, etc., including every account that was not expired because of the managed flag.
Like most features, that reporting was added after mistakes were made: we’d left some managed accounts unpaid for months. Now, with better reporting in place, we couldn’t be happier with it.
We've made significant updates to the infrastructure supporting Where's it Up this year. Many of these changes were necessitated by rapid growth, from a few thousand tests per day to several million. Frankly, without the first two, I'm not sure we could have remained stable much past a million tests per day.
I blogged about our switch to MongoDB for Where’s it Up a while back, and we’ve been pretty happy with it. When designing our schema, I embraced the “schema free world” and stored all of the results inside a single document. This is very effective when the whole document is created at once, but it can be problematic when the document will require frequent updates. Our document process was something like this:
- User asks Where’s it up something like: Test google.com using HTTP, Trace, DNS, from Toronto, Cairo, London, Berlin.
- A bare bones Mongo document is created identifying the user, the URI, tests, and cities.
- One gearman job is submitted for each City-Test pair against the given URI. In this instance, that’s 12 separate jobs.
- Gearman workers pick up the work, perform the work, and update the document created earlier using $set
This is very efficient for reads, but that doesn’t match our normal usage: users submit work, and poll for responses until the work is complete. Once all the data is available, they stop. We’ve optimized for reads in a write-heavy system.
The situation for writes is far from optimal. When Mongo creates a document under exact fit allocation, it considers the size of the data being provided and applies a padding factor. The padding factor maxes out at 1.99x. Because our original document is very small, it’s essentially guaranteed to grow by more than 2x, probably with the first traceroute result. As our workers finish, each attempting to add additional data to the document, it needs to be moved, and moved again. MongoDB stores records contiguously on disk, so every time it needs to grow the document it must read it off disk and write it somewhere else. Clearly a waste of I/O operations.
It’s likely that Power of 2 sized allocations would better suit our storage needs, but they wouldn’t erase the need for moves entirely, just reduce how often they happen.
Solution:
Normalize, slightly. Our new structure has us storing the document framework in the results collection, and each individual work result from the gearman worker is stored in results_details in its own document. This is much worse for us on read: we need to pull in the parent document, then the child documents. On write, we’re saved the horrible moves we were facing previously. Again, the usage we’re actually seeing is: Submit some work, poll until done, read complete dataset, never read again. So this is much better overall.
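The shape of the change, with illustrative field names rather than our actual schema: the parent stays small and stable, and each worker does a single insert instead of a $set into a growing document.

```php
<?php
// Parent document in `results`: created once, never grows.
$result = [
    '_id'    => 'abc123',
    'uri'    => 'google.com',
    'tests'  => ['http', 'trace', 'dns'],
    'cities' => ['toronto', 'cairo', 'london', 'berlin'],
];

// One insert-only child in `results_details` per City-Test pair.
$detail = [
    'result_id' => $result['_id'],
    'city'      => 'toronto',
    'test'      => 'trace',
    'output'    => '...traceroute output...',
];

// Reading the full set means one parent fetch plus a query like
// db.results_details.find({result_id: "abc123"}), one doc per pair:
echo count($result['cities']) * count($result['tests']), " detail docs\n";
```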
It took some slight work to make our front end handle both versions, but this change has been very smooth. We continue to enjoy how MongoDB handles replica sets and automatic failover.
We’ve recently acquired a new client which has drastically increased the number of tasks we need to complete throughout the day: checking a series of URLs using several tests, from every location we have. This forced us to re-examine how we’re managing our gearman workers to improve performance. Our old system:
- Supervisor runs a series of workers, no more than ~204. (Supervisor’s use of Python’s select() limits it to 1024 file descriptors, which allows for ~204 workers.)
- Each worker, written in PHP, connects to gearman stating it’s capable of completing all of our work types
- When work arrives, the worker runs a shell command that executes the task on a remote server over a persistent ssh tunnel, waits for the results, then shoves them into MongoDB.
This gave us a few problems:
- Fewer workers than we’d like, no ability to expand
- High memory overhead for each worker
- The PHP process spends ~99.9% of its time waiting, either for a new job to come in, or for the shell command it executed to complete.
- High load, with one PHP process for every job actually being executed
We examined a series of options to replace this; writing a job manager as a threaded Java application was seriously considered. It was eventually shot down due to the complexity of maintaining another set of packages, and the reduced number of employees who could help maintain it. Brian L Moon’s Gearman Manager was another option, but it left us running a lot of PHP we weren’t using. We could strip down PHP to make it smaller, but that wouldn’t solve all our problems.
Minimizing the size of PHP is pretty easy. Your average PHP install probably includes many extensions you’re not using on every request, doubly so if you’re looking at what’s required by your gearman workers. Look at `./configure --disable-all` as a start.
Solution:
Our solution involves some lesser-used functions in PHP: proc_open, stream_select, and occasionally posix_kill.
We set Gearman to work in a non-blocking fashion. This allows us to poll and check for available work without blocking until it becomes available.
Our main loop is pretty basic:
- Check to see if new work is available, and if so start it up.
- Poll currently executing work to see if there’s data available, if so record it.
- Check for defunct processes that have been executing for too long, kill those.
- Kill the worker if it's been running too long.
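The heart of that loop can be sketched in a condensed, self-contained form: launch shell commands with proc_open(), poll their stdout with stream_select(), accumulate output as it trickles in, and kill anything silent for too long. The Gearman polling, MongoDB writes, and job-count limit are omitted here; this is a sketch of the pattern, not our production worker.

```php
<?php
// Run several shell commands concurrently, non-blocking.
function runConcurrently(array $commands, int $timeout = 30): array
{
    $running = [];
    foreach ($commands as $id => $cmd) {
        $pipes = [];
        $proc  = proc_open($cmd, [1 => ['pipe', 'w']], $pipes);
        stream_set_blocking($pipes[1], false);
        $running[$id] = [
            'proc' => $proc, 'pipe' => $pipes[1],
            'output' => '', 'lastRead' => time(),
        ];
    }

    $results = [];
    while ($running) {
        $read  = array_map(fn ($w) => $w['pipe'], $running);
        $write = $except = null;
        // Very low poll time, so slow jobs don't starve fast ones.
        stream_select($read, $write, $except, 0, 200000);

        foreach ($running as $id => $w) {
            $chunk = fread($w['pipe'], 8192);
            if ($chunk !== false && $chunk !== '') {
                // traceroute-style output arrives slowly; append it.
                $running[$id]['output']  .= $chunk;
                $running[$id]['lastRead'] = time();
            }
            if (feof($w['pipe'])) {
                // Done: close the pipe and the process, record output.
                $results[$id] = $running[$id]['output'];
                fclose($w['pipe']);
                proc_close($w['proc']);
                unset($running[$id]);
            } elseif (time() - $running[$id]['lastRead'] > $timeout) {
                // Defunct: no output for too long, give up and kill it.
                fclose($w['pipe']);
                proc_terminate($w['proc']);
                unset($running[$id]);
            }
        }
    }
    return $results;
}
```

In the real worker, the top of the loop also asks Gearman (non-blocking) for new jobs and caps the concurrent count at 40.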
Check for available work
Available work is fired off using proc_open() (our work tends to look something like this: `sudo -u lilypad /etc/wheresitup/shell/whereisitup-trace.sh seattle wonderproxy.com`). We save the read pipe for later access, and set it to non-blocking. We only allow each non-blocking worker to execute 40 jobs concurrently, though we plan to increase the limit after more testing.
Poll currently executing work for new data
Check for available data using stream_select() with a very low poll time. Some of our tests, like dig or host, tend to return their data in one large chunk; others, like traceroute, return data slowly over time. Longer-running processes (traceroute seems to average 30 seconds) have their data accumulated and appended together over time. If the read pipe is done (checked with feof()), we close it off, and close the process (proc_close()).
Check for defunct processes
We track the last successful read from each process; if it was more than 30 seconds ago, we close all the pipes, then recursively kill the process tree.
Kill the worker
Finally, the system keeps track of how much work it’s done. Once it hits a preset limit, it stops executing new jobs. Once its current work is done, it dies. Supervisor automatically starts a new worker.
Our original site wasn’t actually hugely problematic. We had used a large framework, which often left me feeling frustrated. I’ve lost days fighting with it, on everything from better managing 404s to handling other authentication methods. It was the former that led to me rage-quitting the framework and installing Bullet. That said, we’re not afraid to pull in a library to solve specific problems; we’ve been happily using Zend_Mail for a long time.
Solution:
We’ve re-written our site to use the Bullet framework. Apart from appreciating micro frameworks in general, we find them particularly apt for delivering APIs. We’ve had great success writing tests under Bullet, and being able to navigate the code in minutes rather than hours is a lasting bonus. This leaves the code base heavily weighted toward business logic, rather than a monolithic framework.
Yesterday either the city or Toronto Hydro went through and replaced the streetlights in my neighbourhood. This seemed good, keep the lights working. Then night fell, and while walking to bed I wondered why there was a car with xenon headlights on my front lawn pointing its headlights at my bathroom. That was the only logical reason I could think of for my bathroom to be so bright. It wasn't a car. It was the new streetlights. My entire street now looks like a sports field.
Here is Palmerston, a nice street with pretty street lights (much prettier than ours):
Here is Markham, with the sports field effect:
I took those two pictures on the same camera, with the same settings. I exported them both using the same white balance value. Full resolution options are posted at smugmug. I have emailed Toronto 311, who has forwarded me on to Toronto Hydro. In the meantime I'm going to need upgraded blinds, and to start wearing my sunglasses at night.
We recently expanded the number of disks in the RAID array on the main server handling Where’s it Up requests. Rebuilding the array took roughly 28 hours, followed by background indexing, which took another 16 hours.
During the rebuild, the RAID controller was doing its best to monopolize all the I/O operations. This left the various systems hosted on that server in a very constrained I/O state: iowait crested over 50% for many of them, while load breached 260 on a four-core VM. Fun times.
To help reduce the strain we shut down all unneeded virtual machines, and demoted the local Mongo instance to secondary. Our goal was to reduce the write load on the constrained machine. This broke the experience for our users on wheresitup.com.
We’ve got the PHP driver configured with a read preference of MongoClient::RP_NEAREST. This normally isn’t a problem: we’re okay with some slightly stale results, since they’ll be updated in a moment. Problems can occur if the nearest member of the replica set doesn’t have a record at all when the user asks for it. This doesn’t happen during normal operations, as there’s a delay between the user making the request and being redirected to the results page that requires the record.
Last night, with the local Mongo instance so backed up with I/O operations, it was taking seconds, not milliseconds, for the record to show up.
We shut that member of the replica set off completely, and everything was awesome. Well, apart from the 260 load.
Back in the fall of 2010 I was kicked out of my apartment for a few hours by my fantastic cleaning lady, and I wandered the streets of Montréal on Cloud 9. Not because of the cleaning mind you, but because I’d kissed Allison for the first time the night before, while watching a movie on my couch (Zombieland). Great night. My wanderings took me to the flagship Hudson’s Bay store in Montreal, and inevitably to the electronics department. I decided to check out the 3D televisions.
The first problem I ran into with the 3D TVs was the glasses, they didn’t work. I tried on the first pair (which had an ugly, heavy, security cable attaching them to a podium), no dice. I tried on the second pair, no luck there either. Looking at the glasses in more detail, I found a power button! Pushed, tried them on, nothing. Repeated the process on the other pair, still nothing. There I was, big electronics department, with a range of 3D TVs in front of me, and the glasses didn’t work.
If I was going to describe the target market for 3D televisions in 2010, I might have included a picture of myself: male, 30, into technology, owning a variety of entertainment products and consoles, with decent disposable income. As far as I could tell, I represented the exact person they were hoping would be an early adopter.
While wandering off I finally encountered a salesperson, and mentioned that the glasses for the TVs didn’t work. He told me they were fine, and motioned for me to follow him. It turned out that you had to push and hold the power button for a few seconds to turn them on. As I put them on, the salesperson walked away, and I got to enjoy a demo video in 3D.
Well, sort of. First the glasses had to sync with the television; then I was all set, with a great demo video directly in front of me. Of course, I wasn’t in a room with a single television: there were several along the wall to the right and left of the central one, and since my glasses had synced with the one directly in front of me (not the others), the other televisions had essentially been turned into strobe lights, incessantly blinking at me. When I turned my head toward the blinking lights, the glasses re-synced with a different television, a disorienting procedure that let me view it properly, but turned the one directly in front of me into a strobe light.
So, after requiring aid to put on a pair of glasses that were practically chained down, I was being forced to view very expensive televisions adjacent to a series of strobing pictures with an absentee salesman.
Despite all of the issues, this was really cool! This was 3D, for my living room! No red-blue glasses either, this was the real thing! Maybe I could get one, then invite Allison over again for a 3D MOVIE! Clearly owning impressive technology was the way to any woman’s heart. While those thoughts were racing through my mind I caught my own reflection in a mirror: the glasses were not pretty. If you picture a pair of 3D glasses in your head right now, you’re probably imagining the modern polarized set you get at the movie theatre. Designed to fit anyone, over prescription glasses, rather ugly so people don’t steal them. Those glasses were years away back in 2010; these were active shutter glasses. Rather than just two panes of polarized plastic, each lens was a small LCD panel capable of going from transparent to opaque and back many times a second. They looked a little like this, but in black:
As I put the glasses back on the presentation pedestal and rubbed my sore nose I realized: There was absolutely no way I could try to kiss a girl for the first time wearing a pair of those. I left the TVs behind, I think I picked up some apples I could slice and flambé to serve over ice cream instead.
The kissability test: when considering a product, could you imagine kissing someone you care about for the first time while using it?
Stuart McLean is a fantastic storyteller. I’ve enjoyed his books immensely, but most of all, I’ve enjoyed listening to him tell me his stories on the radio, either directly through CBC Radio 2, or through the podcast made of the show. This past Christmas, I was gifted tickets to see and hear him live here in Toronto, a wonderful time.
When I listen to him on the radio, as his calming voice meanders through the lives in his stories, I often picture him. In my mind, he’s sitting in a library in an overstuffed leather chair, with a tweed coat laid over the arm, and a side table with a lamp and a glass of water beside him… perhaps a large-breed dog dozing at his feet. As each page comes to an end he leisurely grasps the corner, sliding his fingers under the page and gently turning it, just as an avid reader moving through a treasured novel would. This is the man I picture when I hear that measured voice regaling me with tales from the mundane to the fantastic.
This could not be further from the truth.
The man I saw at the Sony Centre for the Performing Arts has nothing in common with the man I pictured but the voice. As his story began, as he lapsed into those measured tones, his feet never stopped moving. He danced around the microphone like a boxer, stepping closer to make a point, jumping to the side in excitement, waving his arms in exclamation, always ready to strike us with another adverb. When he reached the bottom of each page, he’d frantically reach forward and throw it back, as eager as a child on Christmas morning. It’s easy to fall under the spell of a great storyteller, to stop seeing and only listen, but his fantastically animated demeanour shook that away, and spiced the story in ways I couldn’t have imagined over years of radio listening.
Listen to his podcast for a while, then go see him live, he’s an utter delight.
I’ve had a few Valentine’s Days over the years. I’ve spent far too much money, I’ve planned in exacting detail, I’ve left things until the last minute, and I’ve spent a fair few alone. This year, I wanted something special.
Just after Christmas Allison was kind enough to give me a hand-knit sweater she’d been working on for over a year. It’s fantastic. Around the same time she commented that she was jealous of my neck warmer. An idea struck: I’d knit her a neck warmer!
Small problem: I’ve never knit a thing in my life.
I’ve never let not knowing how to do something stop me before, and this didn’t seem like the time to start. I headed up to Ewe Knit, where Caroline was able to give me a private lesson. First I had to use a swift to turn my skein of yarn into a ball. They appear to sell yarn in a useless format to necessitate the purchase of swifts; a good gig if you can get it.
Once I had my nice ball of yarn I “cast on”, a process I promptly forgot how to accomplish. It mostly consisted of looping yarn around one of my knitting needles the prescribed number of times; that number was 17 according to the pattern. Once I’d cast on, the regular knitting started: each row involved an arcane process where I attached a new loop to a loop on the previous row. The first few rows were quite terrifying, but eventually I slipped into a rhythm, and was quite happy with my progress by the time I’d made it to the picture shown below.
Just a few rows later I made a terrifying discovery: I’d invented a new form of knitting. Rather than knitting a boring rectangle, I was knitting a trapezoid, and there was a hole in it. My 17 stitch pattern was now more like 27. There was nothing for it but to pull it out and basically start over.
Several hours, and many episodes of The Office later, I’d slipped into a great rhythm, and developed a mild compulsion to count after every row to ensure I had 15 stitches. The neck warmer was looking great! Just another season of The Office, and some serious “help” from the cat, and I’d be finishing up.
I headed back to Ewe Knit for instructions on casting off, where I tied off the loops I’d been hooking into with each row. Then I sewed the ends together, and wove in my loose ends. A neck warmer was born!
This felt like a success before I’d even wrapped it. I’d spent a lot of time working on something she’d value, I’d learned more about one of her hobbies, and gained new appreciation for the sweater she’d knit for me. She just needed to unwrap it.
She loved it.
WonderProxy will be announcing availability of a new server in Uganda any day now. We’re very excited. When we first launched WonderProxy the concept of having a server anywhere in Africa seemed far-fetched. Uganda is shaping up to be our fifth.
Our provider asked us to pay them by wire transfer, so I dutifully walked to the bank, stood in line, then paid $40CAD in fees and a horrible exchange rate (to USD) to send them money. Not a great day, so I grabbed a burrito on the way home. A few days later we were informed that some intermediary bank had skimmed $20USD off our wire transfer, so our payment was $20USD short. Swell.
In order to send them $20USD, I’d need to go back to the bank, stand in line, hope I got a teller who knew how to do wire transfers (the first guy didn’t), buy $20USD for the provider, $20USD for the intermediate bank, and pay $40CAD for the privilege. $80 to send them $20. Super.
Luckily XE came in to save the day again. Using their convenient online interface I was able to transfer $40USD for only $63CAD, including my wire fee. I paid a much better exchange rate, lower wire fees, and didn’t have to put pants on. The only downside was a lack of burrito. Bummer.
If you’re dealing with multiple currencies and multiple countries, and these days it’s incredibly likely that you are, I’d highly recommend XE.
At WonderProxy we’ve been helping people test their GeoIP-sensitive applications since launch. It’s why we launched. Perhaps ironically, it’s never been a technology we’ve used on our own website. With our upcoming redesign, that’s changing.
- How to use it
- Acquire a database
- Rebuild Apache & PHP
- Handle edge cases
Deciding how you’ll use GeoIP information will affect how much you end up spending on a GeoIP database, how often you’ll renew, and what safeguards you’ll need to put in place. Country-level granularity is relatively easy to come by; city level within the US and Canada, however, tends to be much more expensive.
Our integration goal is to support a nice slogan; for us that’s “You’re in .., Your customers aren’t. WonderProxy: Test Globally”. We opted for region level, as opposed to city or country level. We felt that “Ontario” or “Texas” was more impressive than “Canada” or “United States”, but were also wary of the lower accuracy of city-level data (telling someone they’re in Brooklyn when they’re really in Manhattan wouldn’t inspire confidence).
There are several options available. This step was easy: we bought ours from MaxMind. We feel relatively familiar with the GeoIP data provider marketplace, and MaxMind has seemed both quite accurate and responsive to updates throughout WonderProxy’s existence. IP2Location is another provider with downloads available.
MaxMind also provides API access to its data. We’ve been leveraging this for a long time in our monitoring systems (we check all our servers to ensure they’re geo-locating correctly), but those are all batched processes. Waiting for a remote API to return during page load, particularly for a landing page, is folly. IP2Location also offers an API, as does InfoSniper. APIs work really well in batched processes, or anything detached from page loads.
Our initial build only required the Apache module, which provides additional superglobals to PHP. I can use `<?= $_SERVER['GEOIP_REGION_NAME']; ?>` to get someone’s location; it’s really easy. We later installed the PHP module (using the same database) to support arbitrary IP lookups within our administration systems. We also encoded the MaxMind ISO 3166 data into our application to convert country codes to names.
If you’re taking the API approach life should be easy, there’s plenty of code examples for every major API provider. If you’re using an API you also have the ability to choose different levels of granularity on the fly, full data some of the time, minimal data most of the time to save on credits.
Not every IP will have a result, so it’s important to catch these cases and handle them correctly. We’ve simply decided to test the variable and replace it with “Your Office” when the lookup fails.
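That check is a one-liner worth wrapping up; a sketch (GEOIP_REGION_NAME is the $_SERVER entry the Apache module provides; the function name and fallback string are just our choices):

```php
<?php
// Return the visitor's region, or a friendly default when the
// GeoIP lookup produced nothing for this IP.
function visitorRegion(array $server): string
{
    if (!empty($server['GEOIP_REGION_NAME'])) {
        return $server['GEOIP_REGION_NAME'];
    }
    return 'Your Office';
}
```

In a template, the slogan then reads from visitorRegion($_SERVER).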
On the API front, it’s worth spending a few minutes making a request fail on purpose to ensure your code handles it well. I’ve had a few important daily reports fail because the API we were using was unavailable; frankly, it’s embarrassing.
We've been really happy with how easy the integration has been. I've already added several new integration points throughout our administrative system (providing lookups on banned users, the IP associated with transactions, etc.). For us the integration is really supporting the slogan and looking nice, but there's plenty of practical uses like estimating shipping charges, localizing prices, and adjusting content.
I decided to do this because I was finding our ticket system a bit overwhelming: pages of tickets all awaiting my attention. I had over 60 assigned tickets a week ago. Now I’m down to 39. As a small shop, and without a project manager (dedicated or otherwise), I was doing my best to prioritize tickets based on criteria like: customer impact, revenue generation, time saved, etc. Tickets that fared well in those categories tended to be large affairs, requiring a decent amount of effort. This left me with an intimidating, seemingly endless wall-of-work. Adding Date Opened to the view just made it depressing. The derby seemed like a great way to clear out the work and make the wall less intimidating.
I’m finding my open & assigned ticket screen manageable now. If your team has been working on big issues for a while, why not give them a few days to plow through some easy stuff? I awarded prizes for my derby, giving out chocolate in a few categories: oldest ticket closed, most tickets closed, most humorous commit message.
Now, if you’ll excuse me, I’ve got a bellyache.
For a long time at WonderProxy we neglected internal systems, instead directing our efforts to things used by our customers. We’ve built new products, launched redesigns, then a few more products, all the while maintaining user accounts by directly interacting with the database (including a few update queries lacking a WHERE clause).
This was a huge mistake.
As I worked on the redesign for WonderProxy (Original vs Redesign) I added a few basic admin features almost by accident, and all our lives got remarkably easier. I added a few more, and things got easier still. Tasks that used to be a chore (like setting up a free trial) almost became fun. Researching account history is just a few easy clicks, with nice graphs using nvd3, and pretty data tables. Editing accounts in place, with code that understands 30GB = 32212254720 bytes.
Saying things like “creating trial accounts is fun” may sound like gross exaggeration, but it’s not. I’m pretty happy with the code I’ve got there, which may add to it. The form supports pasting in an address like Paul Reinheimer <paul@preinheimer.com>, parsing it into its component parts, and generating an available username. Then for expiry, I leverage PHP’s strtotime() function, so I can enter something simple like “+2 weeks” or “next thursday” and have it parsed properly.
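The expiry field really is just strtotime() plus a sanity check; a sketch (the two-week fallback on parse failure is our own convention, not anything PHP does for you):

```php
<?php
// Parse a human-friendly expiry like "+2 weeks" or "next thursday".
// Fall back to a standard trial length if it won't parse, or if it
// parses to something in the past.
function parseExpiry(string $input, int $now): int
{
    $ts = strtotime($input, $now);
    if ($ts === false || $ts <= $now) {
        $ts = strtotime('+2 weeks', $now);
    }
    return $ts;
}
```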
The speed at which we’re both willing and able to resolve requests has greatly increased. Trial accounts (which convert with great regularity) now take a minute to set up rather than ten, so we’re more likely to do them when requests roll in, rather than waiting until we’re on the server for another reason. Getting an accurate history and being able to quickly modify accounts has helped everyone when it comes to customizing accounts. I’m lacking a basis for comparison, but our revenue has also been climbing nicely since the change; having a dashboard to find clients exceeding their plan limits has certainly helped.
If you’re looking at the next big thing to improve for your team, I’d strongly suggest taking a harder look at your internal tools.
Since we launched WonderProxy we’ve had lots of bills to pay, often to ourselves or contractors. WonderProxy is a Canadian company with a Canadian bank account, so when we need to pay someone in the US we’ve had a few options: cheque, wire transfer, or PayPal.
- Wire Transfer: This requires me to go to the branch in person, wait in line, and pay a lot in fees; then the money shows up in a day or three, possibly with additional fees deducted along the way.
- PayPal: These are easy: open up our PayPal account, make a transfer, log out again. The recipient ends up paying around 3.5% in fees, receives a mediocre at best exchange rate, then waits longer for the money to appear in their account.
- Cheque: These are easy: we open a filing cabinet, pull out a cheque, sign it, and put it in the mail. Then the contractor waits a week or so for the cheque to arrive, deposits it at the bank, receives a horrible exchange rate, and waits two or more weeks for it to clear.
So, everything sucks, bad rates, fees, and possibly involving me going to a bank in person.
Eventually we got frustrated with the money we were effectively losing to crappy exchange rates, and looked further. We came across XE's currency trading services. It took a fair amount of effort to sign up (various forms to be scanned and sent in), but it’s been fantastic. I log on to execute a trade, enter that I would like to buy USD with $1000CAD, and get a spot rate on the transfer. I choose to execute the trade, with the USD funds being deposited in the recipient’s US account. I pay XE through my bank’s online bill payment services, then about a week later the US funds arrive in the recipient’s account.
I get to do it all from my desk, we get a fantastic exchange rate, and the transfer service makes its money on the spread, so there’s no extra fees.
If you’re paying across borders, I would heartily recommend investigating XE to see if they can meet your needs.
I turned down tips when they came up, heck, I also turned down drinks. Though the occasional mother would rephrase her question as “water or juice” and force the pimply dehydrated teenager to drink something. I’d take the water.
Our role was pretty clearly defined: we install the NIC, we give them a 5 minute “tutorial” on how to use the Internet, we leave. We don’t do anything else on their computer, and under no circumstances do we ever use the CD that came with the install package; it bricks computers.
One day I did the install for an older gentleman from the Middle East. After I connected him to the web, he tried to load a webpage, some news site from his home country. It wouldn’t render: he was missing the Microsoft font packs. Our role was drilled into us pretty hard; if we did anything else and it went wrong, there was liability on our company, and a serious amount of flak would come in our general direction. It could also create skewed expectations: “Hey! The guy who did Sally’s internet installed a free virus scanner, and upgraded her Windows, why won’t you do that?”. Lots of problems.
But I installed those font packs.
Now, the gentleman didn’t speak a lot of English, so I had no idea how long he’d been in Canada, or how much news he’d been getting from home. But the emotion on his face when that page loaded… I understood what I’d really done. He handed me a crisp $20 bill; I tried to hand it back, but I’d already lost him to that screen. There might have been a tear on his face; I don’t remember, but it wouldn’t have been out of place.
That was a great day, and the $20 had nothing to do with it.