Over the last decade, police departments across the U.S. have spent millions of dollars equipping their officers with body-worn cameras that record what happens as they go about their work. Everything from traffic stops to welfare checks to responses to active shooters is now documented on video.
The cameras were pitched by national and local law enforcement authorities as a tool for building public trust between police and their communities in the wake of police killings of civilians like Michael Brown, an 18-year-old Black teenager killed in Ferguson, Missouri, in 2014. Video has the potential not only to get to the truth when someone is injured or killed by police, but also to allow systematic reviews of officer behavior to prevent deaths, by flagging troublesome officers for supervisors or helping identify real-world examples of effective and destructive behaviors to use for training.
But a series of ProPublica stories has shown that a decade on, those promises of transparency and accountability have not been realized.
One challenge: The sheer amount of video captured using body-worn cameras means few agencies have the resources to fully examine it. Most of what is recorded is simply stored away, never seen by anyone.
Full story: Police Departments Are Turning to AI to Sift Through Millions of Hours of Unreviewed Body-Cam Footage
This article explained how police body cameras were meant to increase trust and accountability but have instead produced a mountain of unwatched footage. Because so little of it is ever reviewed, it is hard to spot bad behavior or improve training. This is a big problem: if no one is watching these videos, then what is the point? It could worsen the relationship between police and communities. To fix this, police departments need better ways to handle all the video they record and to make sure someone is actually watching it. They also need to act when they find problems, so people can trust that the cameras are actually improving things.
Now that body cameras are so common in policing, there is simply too much video to sift through all of it. I think the mentality behind body cams needs to change a bit to be more realistic. Camera footage is invaluable for addressing specific complaints to a department: if a community member was harassed or had a horrible experience, the department can go back and see exactly what happened. Using cameras for that purpose, rather than trying to rate job performance and review everything, seems like the best way to handle the issue.
I agree, Dylan. Agencies should release footage regardless of how it makes them look. Using AI removes the human factor, so the videos being reviewed are more likely to be processed correctly and efficiently.
I agree that one of the reasons for implementing body cameras is to increase trust in and accountability of officers; however, I do not think it is practical to use them to micromanage individual officers. Trying to watch all of the footage in order to spot bad behavior is unreasonable for really any department. There is definitely a need for better methods of sifting through the footage, but the question of why we are sifting matters as well. The majority of instances when BWC footage is examined closely are those in which there was an incident or a complaint, and those recordings are easy to locate. Sifting through all of the general footage for the purposes of audits or training could be done by a good AI system or a separate department.
I agree that someone should be watching the video footage, considering there is already mistrust between the community and law enforcement. The widespread adoption of police body cameras reflects a commitment to transparency and accountability within law enforcement. However, the prevalence of unwatched footage poses a significant challenge to realizing those ideals: what happens when something goes wrong? Without proper oversight, instances of misconduct may go unnoticed, impeding efforts to improve police practices and community relations. When issues are identified, action must be taken to address them. By prioritizing the responsible use of body cameras and actively responding to concerns, we can uphold the values we are striving to meet and foster a more just and equitable society where all citizens can trust in the fairness and transparency of law enforcement.
What are some of the negative and positive implications of having body camera footage reviewed by AI? Some might say that AI cannot pick up on key behaviors that contribute to the escalation of a situation. Still, given the amount of footage that needs to be reviewed, there doesn't seem to be a more efficient way for these recordings to be examined, so perhaps the best route would be a combination of AI and human examination. A more practical application might be to have AI examine recordings of problematic interactions along with a sample of good interactions to determine positive and negative behaviors within law enforcement.
I think that if the AI were trained on a multitude of examples showcasing bad interactions, it would be capable of finding them elsewhere. With the vast amount of footage that needs to be sifted through, AI would be the most helpful tool for the job. Your sampling idea, if I understand it correctly, is probably the only way to make this work. Combining human examination and AI makes the most sense, though combining the two methods would take longer and increase the workload on the human side.
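To make that concrete, here is a minimal sketch of the supervised approach being described: train a simple text classifier on transcript snippets that human reviewers have already labeled as problematic or routine, then use it only to triage unreviewed footage for human follow-up. Everything below (the labels, the sample data, the threshold) is an illustrative assumption, not any department's or vendor's actual pipeline.

```python
# Hypothetical sketch: flag transcribed body-cam clips for human review.
# Assumes clips have already been transcribed and a small set has been
# hand-labeled (1 = problematic interaction, 0 = routine). All data and
# names here are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in for a human-labeled training set of transcript snippets.
labeled_transcripts = [
    ("Step out of the car now or I will drag you out.", 1),
    ("License and registration, please. Thank you.",    0),
    ("Stop resisting! Get on the ground!",               1),
    ("You're free to go. Drive safe tonight.",           0),
]
texts, labels = zip(*labeled_transcripts)

# TF-IDF features plus logistic regression: a simple, auditable baseline.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score unreviewed transcripts; anything above the threshold goes to a
# human reviewer -- the AI only triages, people make the judgment calls.
REVIEW_THRESHOLD = 0.5  # assumed cutoff; a real system would tune this
unreviewed = ["Put your hands behind your back right now!"]
for clip, score in zip(unreviewed, model.predict_proba(unreviewed)[:, 1]):
    if score >= REVIEW_THRESHOLD:
        print(f"flag for human review (score={score:.2f}): {clip!r}")
```

In practice the labeled sample would need to be far larger and audited for bias, which is exactly where the human half of the hybrid approach comes in.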
What are the legal implications, specifically for burden of evidence and due process, of using AI to sift through body camera footage? While using this sort of AI to find possible incidents and other evidence related to cases could prove useful, will it hold up in a court of law? Without a person involved in searching for the evidence, it falls into a grey area. Even if evidence found with or without AI holds up, there is a chance it could lack context or be incomplete, which could create problems for an ongoing case and could even lead to a mistrial if the gaps are serious enough. Aside from those potential downsides, this move to use AI will undoubtedly make the process of searching through body camera footage easier, and it could even unveil crimes committed by police officers themselves, such as abuse of force.
I can see the benefits of using AI to review body cam footage. You can program the AI to look for specific things, like an officer cutting off a civilian while they are trying to talk or vice versa, or ask it to look for aggressive behavior. The use of body-worn cameras was supposed to make police more transparent, so why have they become less transparent than before? If departments use AI to look through body cam footage and it finds things that make them look bad, they can choose not to release that particular video.
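As a rough illustration of the "cutting off a civilian" check: if speech-to-text with speaker diarization has already produced who-spoke-when segments, interruptions can be counted with very simple logic, no deep learning required. The turn data and the overlap rule below are made up for the sake of the example.

```python
# Hypothetical sketch: count officer interruptions in a diarized transcript.
# Assumes a speech-to-text step has already produced (speaker, start_sec,
# end_sec) turns; the data and the overlap rule are illustrative.
turns = [
    ("civilian", 0.0, 4.0),
    ("officer",  3.2, 7.0),    # starts 0.8 s before the civilian finishes
    ("civilian", 7.5, 11.0),
    ("officer",  12.0, 15.0),  # waits their turn; not an interruption
]

def count_interruptions(turns, speaker="officer"):
    """Count turns where `speaker` starts before the previous speaker ends."""
    count = 0
    for prev, cur in zip(turns, turns[1:]):
        prev_speaker, _, prev_end = prev
        cur_speaker, cur_start, _ = cur
        if cur_speaker == speaker and prev_speaker != speaker and cur_start < prev_end:
            count += 1
    return count

print(count_interruptions(turns))  # -> 1
```

A real system would have to cope with diarization errors and harmless overlapping speech ("uh-huh"), which is another argument for keeping humans in the review loop.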
How ethical is AI when it comes to watching body camera footage? That job is meant for an officer in charge, not AI. AI can't always read social environments the way a person can, so perhaps a combination of AI and human review might be the solution. While there are some advantages to AI reviewing the footage, how can we be sure the AI is considering all the possible interpretations of what it sees? And how do we know it isn't picking out footage that makes the officer look good?