The Art of Embracing Change
As we enter 2019, artificial intelligence and robotics in medicine have moved well beyond the early adoption stage. The progenitor of surgical robots, the hulking da Vinci system for minimally invasive surgery, is nearly two decades old.
Now, newer devices, like the Axsis, a prototype robot shrunk to the size of a soda can, manipulate tools as small as 1.8 mm in diameter. We also have robots that can be swallowed or inserted inside the eye, and don't forget the major leaps in the development of an artificial eye.
It's natural that we humans feel threatened. Everyone from factory workers to physicians is left wondering whether they will become obsolete. The answer, depending on whom you talk to, is good, bad, or neither. Artificial intelligence and robots will be the most disruptive technology to intersect with humans since cavemen used fire.
There's much to celebrate here. Innovations in surgical robots will enhance procedures for physicians and surgeons alike. The surgeon's expertise remains paramount, but robotic capabilities like motion scaling and minimally invasive access may lead to better outcomes. Robotics may also make entirely new procedures possible.
Work is already progressing in this direction, with intraoperative imaging and automated anterior segment surgery, such as femtosecond laser-enabled keratoplasty.
And femtosecond laser-assisted cataract surgery has improved safety in challenging cases, such as dense nuclei, pseudoexfoliation, and patients with Fuchs dystrophy. It's exciting to consider how these tools will evolve in the future.
Advancements in today's camera viewing systems allow surgeons to operate using 3-D high-resolution monitors. Integrating OCT imagery and ultrasound can even enable procedures on structures too small for a surgeon's eye to resolve.
Making machine learning work for surgeons
Artificial intelligence has enabled robots to accomplish amazing feats at superhuman speeds. When it comes to eye surgery, however, the machines will need to be trained to mimic complex movement patterns.
By combining cameras and OCT simultaneously, a machine could calculate the precise direction and vacuum needed to peel an epiretinal membrane, or to aspirate cortical lens material.
The nature of work is bound to change for everyone. But there will never be a substitute for a surgeon who can respond in real time to a crisis. And, as of yet, there's no program for empathy. So embrace the new technology where it can help you overcome human limitations, and take part in a new world of healing.
RETINA-AI has released an Android version of its Fluid Intelligence app, the world's first mobile artificial intelligence (AI) app for eye care providers.
Fluid Intelligence uses AI to detect macular edema and subretinal fluid on OCT scans of the retina. The eye care provider uses the app to photograph an OCT. The captured image is then sent to the cloud, where RETINA-AI's proprietary machine learning algorithm determines the diagnosis and generates a report in real time.
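The capture-and-report workflow described above can be sketched as a minimal local mock. Every name, the scoring logic, and the decision threshold here are hypothetical illustrations; RETINA-AI's actual cloud API and model are proprietary and not public.

```python
# Minimal mock of a capture -> classify -> report pipeline.
# All names, the toy "model," and the threshold are hypothetical --
# this is NOT RETINA-AI's implementation.
from dataclasses import dataclass


@dataclass
class Report:
    fluid_detected: bool
    confidence: float


def classify_oct(score: float, threshold: float = 0.5) -> Report:
    """Stand-in for the cloud model: flags possible fluid when a
    toy per-image score exceeds a threshold."""
    score = min(max(score, 0.0), 1.0)  # clamp the toy score to [0, 1]
    return Report(fluid_detected=score > threshold, confidence=score)


# In the real app, the image would be uploaded and scored server-side;
# here we just feed a made-up score to show the report structure.
report = classify_oct(0.82)
print(report)  # Report(fluid_detected=True, confidence=0.82)
```

The point of the sketch is the shape of the result, not the model: the provider gets back a structured report, not a raw probability.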
With 90% accuracy, says RETINA-AI, the app detects macular edema and subretinal fluid. It is an excellent screening tool for diabetic macular edema, exudative macular degeneration, retinal vein occlusion edema, cystoid macular edema after cataract surgery, and macula-off retinal detachments.
Fluid Intelligence by RETINA-AI can help with questions like “Does my patient need an eye injection?”
RETINA-AI's research continues with the DATUM alpha study, a multicenter retrospective image analysis validation study of Fluid Intelligence, so stay tuned.
DATUM stands for Diagnostic mobile Artificial intelligence Technology for detecting fluid Underneath and Inside the Macula. In the study, fellowship-trained retina specialists will compare their assessments of OCTs to those of the AI app. Initial results are in press and show a sensitivity of 90% and a specificity of 82.5%. More studies will follow DATUM as RETINA-AI continues to develop and train its AI algorithms.
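For readers less familiar with these metrics, sensitivity and specificity fall out directly from confusion-matrix counts. The counts below are hypothetical, chosen only to reproduce the reported 90% / 82.5% figures; they are not the DATUM study's actual data.

```python
# Sensitivity and specificity from confusion-matrix counts.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: TP / (TP + FN) -- how often fluid is caught."""
    return tp / (tp + fn)


def specificity(tn: int, fp: int) -> float:
    """True negative rate: TN / (TN + FP) -- how often dry scans are cleared."""
    return tn / (tn + fp)


# Illustrative counts only (NOT the study's data):
# 90 true positives, 10 false negatives -> sensitivity 0.90
# 33 true negatives,  7 false positives -> specificity 0.825
print(sensitivity(90, 10))  # 0.9
print(specificity(33, 7))   # 0.825
```

For a screening tool, high sensitivity is the priority: a false negative (missed fluid) is costlier than a false positive that simply prompts a specialist referral.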
Alphabet's DeepMind Tackles Healthcare, Stumbles on Privacy
Alphabet, Google's parent company, also claims DeepMind as its progeny. Last year, the artificial intelligence (AI) company built a system that taught itself to play the game Go better than any human on the planet. And now it's turning its considerable technological power on healthcare. AI is already becoming interwoven with healthcare. But there are boulders in the path.
London-based DeepMind ran into a couple of these while testing its first app, a mobile program that alerts doctors and other caregivers to fluctuations in a patient's condition. Other investigational efforts through the UK's National Health Service include an app that aims to analyze medical imagery as accurately as experienced doctors do.
The first potentially fatal condition DeepMind built an alert for was acute kidney injury (AKI). To test it, London's Royal Free Hospital provided DeepMind with 1.6 million patient records in breach of UK data protection law. Can you imagine our regulators freaking out?
Royal Free and DeepMind avoided fines, though the hospital disputes that the mobile app, called Streams, could have been tested any other way.
The incident has chilled the research climate with an Arctic blast, but, as new technology floods the market, it stands as a cautionary tale about the dangers of circumventing ethics and patient privacy.