Sunday, November 20, 2016

Will Liquid Handling Robots Ever Join the 21st Century?

In the course of this blog, there are many topics I've thought about writing on but haven't touched.  Sometimes the topic would be too revealing of what I am working on, sometimes I'm not satisfied with the result, but far too often I procrastinate so long that it no longer seems fresh.  Or I tell myself I'll do it another time, when the moment is right, which it never is.  But a recent Twitter exchange reminded me of a long-suppressed lament about some expensive, finicky and problematic -- but very useful -- denizens of the modern lab: liquid handling robots.

Nick Loman was needling Josh Quick via Twitter about how Josh would set up a complex set of reactions: do it manually, or program the robot?

[Embedded tweets not reproduced here.]
Before I start, I really should cover my nearly non-existent bona fides. I haven't actually done any robot programming, but I did come close at the first starbase.  We had purchased a Janus system from Perkin-Elmer (that's what they are now; it could have been Caliper then).   I figured learning how to program it would be a useful skill, so I sat in on the training class.  Then two things happened: I realized we were well covered by a senior research associate I had recruited from the old Codon team, and the two large bones in my left leg had a skiing-induced altercation that sent me to the hospital and rehabilitation for a week.  When I came back, my usual challenge of not bumping into things was now augmented with crutches, so I had no business hanging around an expensive robot in an increasingly cramped lab.

However, in that time I made a few observations that jibed with what I had seen at Codon.  Observations that I suspect are still reasonably valid (well, I fear they are completely valid).

First, programming these robots correctly is far more complicated than I ever imagined.   This would be a natural place for a good visual programming language, but I don't believe the vendor supplied one.  That's problem number two: the command languages all seem to be proprietary.  Each robot is different, but that's no excuse.
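To make the complaint concrete, here is a sketch of what a vendor-neutral, declarative protocol layer could look like, written in Python.  Everything here is hypothetical -- none of these names belong to any real vendor's API -- but the idea is that the science gets declared once and each vendor supplies a driver that translates it into machine movements:

# Hypothetical sketch of a vendor-neutral protocol description.
# No real robot API is implied; a per-vendor "driver" would
# translate each abstract step into actual deck movements.

protocol = {
    "name": "plate replication",
    "steps": [
        {"op": "transfer",
         "source": {"labware": "96-well PCR plate", "slot": "A1"},
         "dest":   {"labware": "384-well plate",    "slot": "B2"},
         "volume_ul": 10,
         "fresh_tip": True},
    ],
}

def run(protocol, driver):
    """Hand each abstract step to a vendor-specific driver object."""
    for step in protocol["steps"]:
        driver.execute(step)

A protocol written this way could move between a Janus, a Tecan or a Hamilton just by swapping the driver, which is exactly what the proprietary languages prevent.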

When I first started being interested in programming, both parents taught me that computers do precisely what you tell them to, nothing more, nothing less.  Our robot, and I suspect the others, follows this maxim with painful excess.  Not only do you need to program gross movements, but you need to calibrate them for the precise plasticware you are using; tiny differences between plates are huge differences for the robot.  Mess this up, and your pipet tips will crash into the bottom of the plate or eject the liquid off-center in the well.  I agree with the one tweet above that our local instrument rep was great at programming the robot, but that's not really a satisfactory process for a rapidly-changing research environment.  If robots are going to contribute to a rapid idea-experiment cycle, easy (shall we say "fluid"?) programming is essential.
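To illustrate why the calibration is so unforgiving, here is a toy calculation (my own invention, with made-up dimensions, not any vendor's actual labware definitions).  The Z coordinate the tip must reach is derived arithmetically from the declared plate geometry, so a millimeter of error in the stated well depth is the difference between a clean aspiration and a crash:

# Toy illustration of why labware geometry must be exact.
# All dimensions are invented; real definitions carry many more.

from dataclasses import dataclass

@dataclass
class Plate:
    deck_height_mm: float  # height of plate top above the deck
    well_depth_mm: float   # distance from well top to well bottom

def aspirate_z(plate, clearance_mm=1.0):
    """Z target for the tip: just above the well bottom."""
    return plate.deck_height_mm - plate.well_depth_mm + clearance_mm

declared = Plate(deck_height_mm=14.4, well_depth_mm=10.9)
actual   = Plate(deck_height_mm=14.4, well_depth_mm=11.9)  # 1 mm deeper

# A program calibrated to 'declared' but run on 'actual' stops the
# tip 1 mm higher than intended and may aspirate air; the opposite
# error drives the tip into the plate bottom.
print(aspirate_z(declared), aspirate_z(actual))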

Worse, these expensive robots have absolutely no situational awareness, as they have nearly no senses with which to check their instructions.  So if you load the wrong plasticware, or put the wrong fixture at the wrong location, or forget to actually put plates on the deck, the robot will merrily follow its instructions without any complaint (well, unless you cause a physical crash).  About the only sensor I can remember on our Janus is a barcode reader, which was an extra.  Painfully, that reader is on the deck, so extra steps must be used to employ it.  A particularly unfortunate mistake would be to put the tip and plate dump chute in the wrong location, so that discards pile up on the deck.  Far messier is to have it in the correct location, but without a correctly positioned and empty trash bin under its aim.

This is utterly bizarre in a world with self-driving cars. An obvious fix would be to put a small camera on each moving head, to actually scan the deck for correspondence with what the program is expecting.  Something as simple as QR-coding each fixture would let the robot match the physical fixture layout against the virtual one.  Better yet, shouldn't the robot be able to determine whether plates have actually been put in position?  Precisely identifying plate types from visuals alone (or perhaps augmented with ultrasound) is probably too far a reach, but that doesn't mean the system couldn't catch simple errors, such as loading a 96-well plate when a 384-well plate is required. Of course, if you barcode the plates early and have a good LIMS (allegedly these exist somewhere), the LIMS and robot could collaborate to enforce correct labware.
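Here is a sketch of how simple that pre-flight cross-check could be, assuming a head-mounted camera and QR-labeled fixtures (again, all hypothetical; scan_qr_at stands in for a sensor today's robots don't have):

# Hypothetical deck-verification pass: before starting a run, scan
# each deck position and compare what is physically present with
# what the program expects.

expected_deck = {
    "1": "tip_box_200ul",
    "2": "plate_384well",
    "3": "trash_chute",
}

def scan_qr_at(position):
    """Placeholder for a head-mounted camera read: returns the QR
    payload found at a deck position, or None if nothing is seen."""
    raise NotImplementedError("no such sensor on today's robots")

def verify_deck(expected):
    errors = []
    for position, fixture in expected.items():
        seen = scan_qr_at(position)
        if seen != fixture:
            errors.append(f"position {position}: expected {fixture}, "
                          f"found {seen or 'nothing'}")
    return errors  # refuse to start the run unless this list is empty

A few seconds of scanning would be cheap insurance against a run that merrily pipets into empty deck positions.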

Why can't Janus join its sisters Alexa, Cortana and Siri and have some smarts?  "Janus, I want to replicate four plates with 10 microliters diluent added -- how do I set this up?"  Programming the robot is a major barrier to using the robot; making it easier to run common tasks should be a goal.
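Even without real speech recognition, a library of parameterized task templates would capture much of this.  A hypothetical sketch of how that "replicate four plates" request might expand:

# Hypothetical task template: replicate N plates, first adding
# diluent to each destination. A smarter front end could fill in
# these parameters from a typed (or spoken) request.

def replicate_plates(n_plates, diluent_ul, transfer_ul=5.0):
    steps = []
    for i in range(1, n_plates + 1):
        steps.append({"op": "dispense", "reagent": "diluent",
                      "volume_ul": diluent_ul, "dest": f"copy_{i}"})
        steps.append({"op": "replicate", "source": f"source_{i}",
                      "dest": f"copy_{i}", "volume_ul": transfer_ul})
    return steps

# "Janus, replicate four plates with 10 microliters diluent added":
plan = replicate_plates(n_plates=4, diluent_ul=10)

The template, not the user, would then work out how the deck needs to be laid out.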

There's also the question of efficiency: many robot programs can accomplish the same task, but not all are equally quick or stingy with tips.  We had a consultant, "The Robot Whisperer," who could significantly tighten up a program.  Even our manufacturer's rep, who is a wizard with robot coding, is in awe of her skills.  But again, scheduling a consultant can be a frustrating delay.  With better high-level languages, shared across hardware, the equivalent of code optimizers could be built. These would probably be interactive to a degree, as sometimes an optimization will risk something important (such as contamination) or simply involve trading off time against consumables.
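As a toy example of what such an optimizer might do (my own sketch, not the Robot Whisperer's actual tricks): consecutive transfers of the same reagent from the same source can share a tip, saving both tips and pickup time, but only if the user confirms that contamination isn't a concern:

# Toy optimizer: reuse one tip across consecutive transfers of the
# same reagent from the same source. An interactive optimizer would
# ask before applying this, since tip reuse can risk contamination.

def optimize_tip_use(steps, allow_reuse):
    if not allow_reuse:
        return steps
    optimized, last_key = [], None
    for step in steps:
        key = (step.get("reagent"), step.get("source"))
        step = dict(step)
        step["fresh_tip"] = (key != last_key)  # reuse on a repeat
        last_key = key
        optimized.append(step)
    return optimized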

I'd love to be wrong on all this and have someone point out in the comments that there exist lab robots which incorporate these very features.  I don't have much expectation of being wrong, but for once it is what I'm hoping for.  Liquid handlers are powerful instruments, but that power is diluted by the arcane nature of programming them.



5 comments:

Anonymous said...

My alma mater built a showpiece building for new labs with a huge atrium housing my favourite cafe. They opened a new facility for creating monoclonal antibodies, with all glass walls to show off, right next to the cafe. At every opportunity they touted this place as 'high throughput with the latest liquid handling robots'; they had at least 3 or 4 if I remember correctly. I had coffee there 2-3 times a week for about 2 years after it opened, and I remember only once seeing the robots actually being used -- and I'm pretty sure that time it was a sales rep. I think they have finally sorted it out now, but they lay idle for a long time.

Jonathan Badger said...

Some of the things you suggested (better error handling and so on) probably would be feasible, but the comparison with self-driving cars is interesting, because we are finding out from people getting killed by their Teslas on autopilot that the problem is rather more complicated (and likely decades away from being practical) than the picture of a nearly-complete solution presented in Google demos would suggest. While having an experiment fail due to bugs isn't as obviously bad as slamming into a trailer, the dangers are analogous.

Daniel Swan said...

You may not have come across Antha yet...

https://www.antha-lang.org/academy/run/lha/index.html

Great if you have a CyBio or Gilson liquid handler. Not so great for Hamilton/Tecan/Perkin Elmer.

Unknown said...

The two systems I've trained on (Hamilton and Tecan) have absolutely horrible GUI-based programming systems; a proper language would make things a lot better. You're also right about the near-complete lack of sensors: the robots I'd worked on only had fluid-detection sensors, and only when used with specialised (and expensive) tips. What always amazed me was that they didn't even have any force-feedback sensors in the heads. As a result I've seen a robot ram its tips into a 96-well plate hard enough that they bent, locking the plate in place; it then lifted up, carrying the plate with it, and slammed it down into the next plate, knocking the head out of alignment and spilling the plates across the deck. It did enough damage that it cost a few thousand to realign the head and replace the damaged parts. I sometimes wonder if this is why they don't have the force-feedback sensor: add a hundred dollars' worth of sensors to the robot and the manufacturer loses thousands of dollars of repair and replacement revenue each year...

Anonymous said...

It is likely a matter of scale. Tesla can amortize their development costs over a few obscenely priced units for the one-percenters, and in some ways they ride the infrastructure created by the big boys. Siri, Cortana, etc. development costs are amortized over billions upon billions of devices. All you ask for is entirely possible, but it has steep upfront capital costs that somehow need to be recouped. The most likely path is some startup that sells the stuff to the one-percenters of the corporate world until it eventually trickles down. Unfortunately VCs mostly seem interested in investing in the next Grindr or Tinder, not in a startup with at least a decade-long horizon and steep capital needs. Most of your wishlist was around UI and usability, but the whole architecture could use rethinking. There are way too many moving parts and needless bulk. The current architecture follows a "suburban sprawl" mentality, whereas going multilayer vertical with complex elastomer fluidics is likely the path to overall simplicity, modularity and upgradeability.