Leveraging AI for Smarter Policing



Modern police patrol in the United States is crucial to maintaining public order, deterring crime, allowing for rapid response to incidents, and facilitating community engagement. However, staffing shortages and limited resources can impede effective, round-the-clock patrolling. In addition, the growing complexity of urban environments and heightened public expectations require increasingly sophisticated and adaptive patrol strategies to ensure public safety. As technology improves, leveraging artificial intelligence (AI) for smarter policing becomes critical for law enforcement agencies.

Currently, the management of police patrols tends to be labor-intensive and prone to error. Agencies manually analyze data, often relying on outdated reports and gut instinct, to determine where and when patrols should go, sometimes resulting in over-policing, biased decision-making, a lack of transparency or accountability, and overstretched analysts. By leveraging AI, however, we can create time-saving practices that improve patrol management.

AI as a Tool for Modern Patrol Management

AI has the potential to revolutionize patrol management by leveraging data to optimize resource allocation and response strategies. A 2023 study found that, when trained effectively, AI can support patrol management in better serving the community. It can direct patrol deployment to when and where it is most needed, arm officers with real-time information and analytics, and enhance their situational awareness and decision-making capabilities. It is important to note that AI is not meant to replace police officers. Instead, AI supports officer safety and knowledge through enhanced data and analytics in real time.
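To make the resource-allocation idea concrete, here is a minimal sketch of one way predicted risk could be translated into deployment: patrol units are distributed across beats in proportion to forecast risk. The beat names, risk scores, and unit count below are hypothetical illustrations, not output from any real system.

```python
# A minimal sketch of risk-weighted patrol allocation.
# All beats, risk scores, and unit counts are hypothetical.

def allocate_patrol_units(risk_by_beat: dict[str, float], total_units: int) -> dict[str, int]:
    """Distribute a fixed number of patrol units across beats in
    proportion to each beat's predicted incident risk."""
    total_risk = sum(risk_by_beat.values())
    allocation = {
        beat: int(total_units * risk / total_risk)
        for beat, risk in risk_by_beat.items()
    }
    # Assign any units left over from integer rounding to the highest-risk beats.
    leftover = total_units - sum(allocation.values())
    for beat in sorted(risk_by_beat, key=risk_by_beat.get, reverse=True)[:leftover]:
        allocation[beat] += 1
    return allocation

if __name__ == "__main__":
    risk = {"Beat 1": 0.42, "Beat 2": 0.18, "Beat 3": 0.25, "Beat 4": 0.15}
    print(allocate_patrol_units(risk, total_units=10))
    # -> {'Beat 1': 5, 'Beat 2': 1, 'Beat 3': 3, 'Beat 4': 1}
```

Real systems weigh far more than a single risk score, but the principle is the same: deployment follows the data rather than gut instinct.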

Common Challenges in AI Implementation

The implementation of AI in policing is still in its early stages. As adoption grows, so do the challenges for police administrations, data analysts, and AI developers.

  • Evaluating Effectiveness: Quantifying the effectiveness of preventative AI-driven patrols can be complex, since proactive measures often prevent incidents before they occur, making the direct impact of specific actions hard to measure. Accurate data collection would involve partnerships among vendors, police agencies, and academia to set up rigorous experiments, which in turn requires the buy-in of police agencies, local governments, and the community (see the evaluation sketch after this list).
  • Continuity and Leadership: Leadership changes are common in modern policing, and new commanders may discontinue or slow the practices instituted by their predecessors. This may lead to inconsistencies in data and evaluation and limit the progression of AI in policing applications.
  • Lack of Feedback Mechanisms: The lack of a structured feedback loop intended to evaluate the effectiveness of AI systems can lead to incomplete data or inefficient and underutilized AI capabilities.
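
As a sketch of what such an experiment might look like, the toy example below assumes beats can be randomly assigned to AI-directed and conventional patrols, then uses a permutation test to ask whether the observed difference in incident counts could plausibly be due to chance. All counts are invented for illustration.

```python
# A minimal sketch of a randomized evaluation of preventative patrols.
# The incident counts below are fabricated purely for illustration.
import random
import statistics

random.seed(0)

treated = [12, 9, 14, 8, 11, 10]    # weekly incidents, AI-directed beats
control = [15, 13, 17, 12, 16, 14]  # weekly incidents, conventional beats

observed_diff = statistics.mean(control) - statistics.mean(treated)

# Permutation test: how often does random relabeling of beats
# produce a reduction at least this large?
pooled = treated + control
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[len(treated):]) - statistics.mean(pooled[:len(treated)])
    if diff >= observed_diff:
        extreme += 1

print(f"observed reduction: {observed_diff:.2f} incidents/week")
print(f"approximate p-value: {extreme / trials:.3f}")
```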

The upside is that AI solutions have the potential to make these challenges worth tackling. AI promises to deliver precise crime forecasts and real-time data analysis while minimizing bias. While we can never remove all bias, AI solutions can mitigate it as much as possible. With automated report writing and other time-saving features, AI can also help police officers become more efficient and increase their overall patrol time in the community.

Ethical and Legal Considerations for AI Implementations

Several ethical and legal issues must be considered to implement AI strategies in modern policing without harming the trust between agencies and their communities.

  • Bias and Discrimination: One of the most widely discussed ethical challenges regarding AI-based solutions to crime prevention is the possible introduction of increased bias and discrimination in policing. Bias can be introduced at multiple stages of the development process of AI-based solutions.
  • Modeling: The first point where bias can be introduced is in deciding what crime to model. Data modeling tends to bias organizations toward questions that are easier for computers to answer. For example, property crime prediction is a common goal of predictive policing because burglars tend to be territorial and geographic analyses are relatively easy to create. This can skew police patrols toward property crime, which matters because the demographics of property crime are not the same as those of other types of crime.
  • Sampling: Data sampling is another possible source of bias. AI systems learn by example, so for AI to be unbiased, the data on which it is trained must be representative of the community. However, data can be skewed by historical practices. For example, if police have previously concentrated efforts on certain types of crime, that crime will be overrepresented in future data, causing what's referred to as a "positive feedback loop": AI systems will send more police officers to those areas in the future, exacerbating the imbalance (a dynamic illustrated in the simulation after this list).
  • Elements of Analysis: Feature selection is a third area that can introduce bias. Police departments and patrol management system developers must choose which elements to observe and fold into their analyses. They need to decide whether to consider only basic factors, such as the location and time of a crime, or a broader range of factors, such as socio-economic status. Different choices about which features to use to predict crime can affect outcomes differently, including the distribution of police patrols across neighborhoods with varying demographic profiles.
  • Legal Challenges: AI-based patrol management works through statistical inference about crime at a location, but citizens’ detention and arrest cannot be justified based on inference. It remains a legal question whether a block labeled “high risk” for property crime changes the reasonable suspicion standard for stops, searches, and seizures on that block.
  • Transparency Issues: The complexity of the models police departments use to make predictions can add a layer of opacity to decision-making. While this complexity allows for impressive accuracy in directing police patrols, it makes it hard for a non-expert to understand why the AI made the prediction it did.
  • Community Trust: All of these issues, particularly the risk of increased bias and reduced transparency, can lead to a lack of community trust in AI-based patrol management systems and in the police themselves.
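
To illustrate the sampling feedback loop described above, the toy simulation below gives two areas identical true crime rates but starts Area A with more patrols. Because detections scale with patrol presence, and the (hypothetical) allocation policy over-weights apparent hot spots, the initial imbalance compounds year over year. All rates and the policy itself are invented for illustration.

```python
# A toy simulation of a "positive feedback loop" in patrol data.
# Both areas have the same underlying crime rate; only the starting
# patrol allocation differs. All numbers are hypothetical.

TRUE_RATE = 0.10       # identical underlying crime rate in both areas
HOTSPOT_WEIGHT = 1.5   # >1 models a policy that over-concentrates on hot spots

patrol_share = {"Area A": 0.6, "Area B": 0.4}  # historical imbalance

for year in range(1, 6):
    # More patrol presence -> more crime detected, even at equal true rates.
    detected = {area: TRUE_RATE * share for area, share in patrol_share.items()}
    # Next year's patrols follow detections, amplified by the hot-spot policy.
    weights = {area: d ** HOTSPOT_WEIGHT for area, d in detected.items()}
    total = sum(weights.values())
    patrol_share = {area: w / total for area, w in weights.items()}
    shares = ", ".join(f"{a}: {s:.0%}" for a, s in patrol_share.items())
    print(f"year {year}: {shares}")
    # Area A's share climbs each year despite identical true crime rates.
```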

Realizing the benefits of AI-based patrol algorithms, including increased patrol efficiency and support for agencies during staffing shortages, requires careful work during initial setup and thorough ongoing monitoring to ensure transparency and build community trust.

Reducing Bias in Modern AI-Based Patrol Algorithms

Modern AI-based patrol algorithms already do several things to reduce the potential for bias. For example, ResourceRouter, a product produced by SoundThinking, bases its crime modeling on crime types that are predominantly reported by citizens and victims. These tend to be the crimes that citizens believe are the greatest threat to community safety, as opposed to crimes police discover while on patrol.

In addition, it is possible to remove features that strongly correlate with race and ethnicity, such as the percentage of the population below the poverty line, the percentage of rental housing in an area, and median household income, from patrol management algorithms to reduce demographic bias without compromising predictive accuracy. However, which specific features can safely be removed varies heavily by city and crime type, so per-city correlation analysis must be applied consistently to keep bias out of future AI operations.
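A minimal sketch of what such a per-city correlation screen could look like is shown below, assuming per-area data is available in a pandas DataFrame. The column names, threshold, and file are hypothetical, and a production analysis would use more robust methods than a single Pearson correlation.

```python
# A minimal sketch of per-city correlation screening for candidate features.
# All column names, the threshold, and the input file are hypothetical.
import pandas as pd

def flag_demographically_correlated(df: pd.DataFrame,
                                    demographic_col: str,
                                    candidate_features: list[str],
                                    threshold: float = 0.5) -> list[str]:
    """Return candidate features whose absolute Pearson correlation with
    the demographic column meets or exceeds the threshold."""
    corr = df[candidate_features].corrwith(df[demographic_col]).abs()
    return corr[corr >= threshold].index.tolist()

# Example usage with hypothetical per-block-group data for one city:
# blocks = pd.read_csv("city_block_groups.csv")
# to_review = flag_demographically_correlated(
#     blocks,
#     demographic_col="pct_minority_population",
#     candidate_features=["pct_below_poverty", "pct_rental_housing",
#                         "median_household_income", "distance_to_downtown"],
# )
# print("Features to review for removal:", to_review)
```

Because correlations differ from city to city, the screen must be rerun per city and per crime type rather than applied once globally.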

Ethical Best Practices for Agencies Interested in AI-Based Solutions

When searching for an AI tool to use at your agency, it is important to consider the cost of the solution, its user-friendliness, security and privacy compliance, and scalability. However, it is also important to consider whether the company adheres to some ethical best practices regarding the use of AI. Some of the best practices to consider include:

  • Law enforcement agencies and technology firms can work together to conduct and publish regular, comprehensive audits of AI systems to monitor and correct for biases (one simple audit metric is sketched after this list).
  • Technology companies can draw on diverse stakeholders to develop ethical frameworks for their solutions. Ideally, this would be paired with a formal ethics office that ensures compliance with these guidelines.
  • Ethical and legal lapses tend to happen at the point at which technology companies hand off their product to a new client. To mitigate this, technology companies can initiate training and awareness programs for police departments on the ethical use of AI, possibly even extending those classes to the community.
  • Technology companies can implement transparency into AI algorithms for external audits and stakeholder scrutiny.
  • Police agencies and technology companies can establish clear protocols for AI decision-making, ensuring human oversight and compliance with ethical and legal requirements.
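
As one example of what an audit metric might look like, the sketch below compares how often a model flags areas as high risk across neighborhood demographic groups; a large gap between groups is a signal for deeper review. The group labels, flag data, and function are all hypothetical.

```python
# A minimal sketch of one bias-audit metric: high-risk flag rates by
# neighborhood demographic group. All labels and data are hypothetical.
from collections import defaultdict

def high_risk_rate_by_group(records: list[dict]) -> dict[str, float]:
    """records: one dict per area, with 'group' (demographic majority)
    and 'flagged' (1 if the model marked the area high risk, else 0)."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        flagged[r["group"]] += r["flagged"]
    return {g: flagged[g] / total[g] for g in total}

areas = (
    [{"group": "Group 1", "flagged": f} for f in [1, 1, 0, 1, 0, 0, 1, 0]]
    + [{"group": "Group 2", "flagged": f} for f in [0, 1, 0, 0, 0, 0, 1, 0]]
)
print(high_risk_rate_by_group(areas))
# A large disparity between groups warrants investigation, not an
# automatic conclusion of bias; base rates and context matter.
```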

AI has the potential to revolutionize modern policing. It can reduce crime, increase community trust, and save officers hours of work. Despite the challenges involved in applying AI solutions to police patrol management, by adhering to sound ethical and legal practices, we can reap the benefits of AI while managing the risk.

To learn more about the ResourceRouter solution for patrol resource management, contact SoundThinking.


Author Profile

Simen Oestmo
Simen Oestmo received his BA in Social Science (Archaeology) from the University of Tromsø, Norway, followed by an MA and PhD in Anthropology (Archaeology) from Arizona State University. He began his career as an Associate Statistical Researcher at the Denver Police Department in Denver, Colorado, and then worked as a Data Scientist at SoundThinking, where he applied analytics, statistics, and machine learning techniques to diverse datasets to improve feature engineering and machine learning modeling. He currently serves as a Director of Data Science at SoundThinking, directing a team that uses machine learning and artificial intelligence for precision policing, improves feature engineering, and builds ETL pipelines for the data needed for modeling and analytics.