The Little iPhone That Cried “Wolf”

There is little doubt that Apple’s investment in new technology for its iPads, iPhones, and Apple Watches, and its encouragement of third-party developers to invest likewise, has led to some remarkable new applications that have proved invaluable to consumers. Health- and fitness-related features have been especially popular, ranging from Fitbit-style tracking of basic activities such as steps, heart rate, distance covered, and floors climbed to customized apps such as those alerting diabetics to dangerous changes in their glucose levels.

Artificial intelligence (AI) and machine learning have become fertile ground for such developments. This technology underpins “Crash Detection,” which is available on the latest iPhone 14 and the Apple Watch Series 8, SE, and Ultra. Using movement sensors and AI software, it is designed to “detect severe car crashes—such as front-impact, side-impact, and rear-end collisions, and rollovers—involving sedans, minivans, SUVs, pickup trucks, and other passenger cars.” When a crash is detected, the phone sounds an audio alarm and displays an alert with an emergency call slider. Using haptics, the watch taps the wearer’s wrist and displays a similar slider. The alerted user can choose to call emergency services or cancel the alert. If the user does not respond within 20 seconds, however, the device automatically calls emergency services and shares its current location.
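
The escalation logic behind such a feature can be sketched in a few lines. The Swift outline below is an illustrative sketch only, not Apple’s implementation; the EmergencyDialer protocol and CrashAlertFlow class are hypothetical placeholders for the detect, alert, and escalate flow described above.

```swift
import Foundation

// Illustrative sketch only; not Apple's implementation.
// EmergencyDialer and CrashAlertFlow are hypothetical placeholders.
protocol EmergencyDialer {
    func callEmergencyServices(sharingLocation: Bool)
}

final class CrashAlertFlow {
    private let dialer: EmergencyDialer
    private let responseWindow: TimeInterval = 20  // seconds before the automatic call
    private var countdown: Timer?

    init(dialer: EmergencyDialer) {
        self.dialer = dialer
    }

    // Called when the on-device model flags a severe crash.
    func crashDetected() {
        presentAlertAndSlider()  // audio alarm plus emergency call slider
        countdown = Timer.scheduledTimer(withTimeInterval: responseWindow,
                                         repeats: false) { [weak self] _ in
            // No response within the window: place the call automatically.
            self?.dialer.callEmergencyServices(sharingLocation: true)
        }
    }

    // User slides to call immediately.
    func userRequestedCall() {
        countdown?.invalidate()
        dialer.callEmergencyServices(sharingLocation: true)
    }

    // User dismisses the alert (false alarm).
    func userCancelled() {
        countdown?.invalidate()
    }

    private func presentAlertAndSlider() {
        // UI omitted in this sketch.
        print("Severe crash detected: showing alert and emergency call slider")
    }
}
```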

To be sure, there are notable examples of lives being saved by the feature. But as it has diffused more widely, a growing number of false positive alerts has begun to place pressure on emergency services. The first reports in October of roller coaster rides accidentally triggering alerts were mildly amusing, but the rash of reports over the Christmas season from winter recreation (skiing, snowboarding, and snowmobiling) has caused concern among emergency dispatchers in ski resort towns, where up to five such false calls a day have been reported. Because the false alarms are not easily separated from genuine alerts, dispatchers have had to implement new protocols to check each call before deploying first responders. Each additional check, however, adds to the time it takes to respond to genuine crises, whether they are reported by these devices or by any other source. The effect is not unlike the pressure that false positives from COVID-19 contact tracing apps placed on testing facilities early in the pandemic.

The key difference between the COVID-19 tracing and crash alert features and personal health and fitness tracking functions is that the former create externalities involving third parties when false positives occur. The more popular the apps become, the more false positives will be generated, and the larger the negative externality imposed on other parties. That cost comprises not just the resources spent responding to false information but also the downstream effects on others whose legitimate use of those resources is compromised (i.e., individuals who actually do have COVID-19, or injured individuals who must wait longer for emergency services to be dispatched than they would if false alarms were minimized).
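
To see how quickly this externality can scale, consider a back-of-the-envelope sketch. All of the figures below are illustrative assumptions rather than measured rates, but they show how even a tiny per-device false alarm rate becomes a steady stream of calls once adoption is wide:

```swift
import Foundation

// Purely illustrative assumptions; none of these figures are measured rates.
let devicesInRegion = 200_000.0         // assumed devices nearby with the feature enabled
let falseAlarmsPerDeviceDay = 0.000_05  // assumed: one false trigger per 20,000 device-days
let uncancelledShare = 0.5              // assumed share of triggers not cancelled in time

let falseCallsPerDay = devicesInRegion * falseAlarmsPerDeviceDay * uncancelledShare
print("Expected false emergency calls per day: \(falseCallsPerDay)")  // 5.0
```

Under these assumed numbers, a dispatch center serving a busy resort area would field roughly five spurious calls a day, each of which must be verified before genuine emergencies can be prioritized.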

Such effects pose ethical questions regarding increasing use of automated response applications. Should any or all such apps be made freely (i.e., unrestrictedly) available, simply because they exist? Or should their availability be subject to some form of regulation? Or perhaps something else?

Given that many of these apps operate in the health domain, it is worth considering the levels of testing and disclosure that purveyors of medical tests must undertake before making them widely available. First, there must be a reasonable level of testing undertaken in realistic circumstances (not just laboratories or technology incubators) before the apps are released. Second, their release must be accompanied by information on their expected activity (the probability of the app triggering an alarm in a given period), sensitivity (the probability that a genuine event triggers an alarm), and specificity (the probability that no alarm is raised when nothing is in fact wrong).
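
Disclosing those three figures would let anyone compute what share of the alarms an app raises will actually be genuine. Below is a minimal sketch of that calculation, using purely hypothetical disclosure figures for a crash-detection-style app:

```swift
import Foundation

// Hypothetical disclosure figures; not measured values for any real app.
let eventRate = 0.000_01   // assumed probability of a genuine severe crash per device-day
let sensitivity = 0.95     // assumed probability that a genuine crash triggers an alarm
let specificity = 0.999_9  // assumed probability that no alarm is raised when nothing is wrong

// Bayes' rule: the share of alarms corresponding to a genuine crash (positive predictive value).
let truePositives = eventRate * sensitivity
let falsePositives = (1 - eventRate) * (1 - specificity)
let shareGenuine = truePositives / (truePositives + falsePositives)
print(String(format: "Share of alarms that are genuine: %.1f%%", shareGenuine * 100))  // ≈ 8.7%
```

Because severe crashes are rare, even an app with near-perfect specificity will produce mostly false alarms; it is precisely this base-rate effect that turns popularity into a workload problem for dispatchers.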

Such information allows an informed assessment of the likely effects both on the individual who owns the device on which the app is installed and on third parties. Cost-benefit analyses can, for example, assess whether the additional costs that false alarms impose on first responders and affected third parties are outweighed by the benefits of faster emergency response.

While it may yet be premature to call for explicit regulation of these apps, it behooves developers to consider their social responsibilities when building them. Voluntary testing and disclosure of the relevant information would be a valuable first step. At the very least, it would signal to users that their choice to deploy these apps has wider societal consequences.
