Calling Robocop: How law enforcement is using machine learning

By SMU City Perspectives team

Published 16 May, 2024


“We don't necessarily worry about predicting specific types of incidents, like we're not trying to predict exactly where a murder will happen or exactly where a robbery will happen. We're just trying to predict how much time and police resources (manpower) an incident is going to take up so that we can plan accordingly. So this is what the machine learning model is going to do for us.” 

Jonathan David Chase

Assistant Professor of Computer Science (Practice)

In brief

  1. GRAND-VISION was developed to make police patrol scheduling more efficient and to improve emergency response times.
  2. GRAND-VISION’s machine learning algorithm allows law enforcement to schedule and deploy patrols efficiently and effectively.
  3. Applying GRAND-VISION in other settings would require sufficient available data and would also depend on cultural factors, such as public trust in the police.

In dense urban environments, law enforcement agencies face a multitude of incidents that need to be handled with limited manpower. More and more of these agencies are turning to data-driven AI as a tool in their policing strategy. In this article, Jonathan Chase, Assistant Professor of Computer Science (Practice), talks about the patrol scheduling system GRAND-VISION: Ground Response Allocation and Deployment - Visualization, Simulation, and Optimisation, which he worked on as a research scientist.

GRAND-VISION makes use of deep learning to generate incident sets to create a patrol schedule that can accommodate manpower, break times, manual pre-allocations, and a variety of spatiotemporal demand features. The system allows for complex scenarios that create results with real-world applicability for large urban law enforcement agencies.

How GRAND-VISION started

GRAND-VISION was a research project, in collaboration with a local law enforcement agency and Fujitsu, to automatically optimise the allocation of police resources based on crime predictions learnt through AI. This is done by incorporating factors such as past crime data, weather, and time of day and season. It was a project under the Fujitsu-SMU Urban Computing & Engineering Corp. Lab (UNiCEN). Led by Professor of Computer Science Lau Hoong Chuin as its Director, the lab provided innovative capabilities and software technology to resolve urban problems with the manpower, space and transportation infrastructure already in place. The lab’s work focuses on multiple fields, including public safety, urban mobility, last-mile logistics and optimisation of resources for decision-making.

“While there is a lot of work done on AI and data analytics already, there has been less research on harnessing data to achieve data-driven optimisation of limited resources and apply this approach to real-world problems. The lab closes the gap by going beyond data for prediction and using it to help with decision-making,” says Prof Lau, who was also the Principal Investigator of GRAND-VISION. “For example, after making predictions on where and when incidents may occur on a daily basis, the question was how to make decisions on hourly allocation and scheduling of patrol cars that minimise response time.”


GRAND-VISION began as an attempt to see whether decision-analysis tools could reduce the emergency response manpower required in a dense urban environment, particularly in situations of high demand for police response. A large number of emergency calls need to be responded to as soon as possible, yet the manpower available to respond is not always enough. The system was created to generate savings in staffing costs without compromising performance. It then evolved to tackle strategic advance planning: predicting how incident patterns would occur and managing targeted patrolling to improve emergency response times.

Different emergency calls have different urgency classifications. A minor incident might be considered non-urgent, while an ongoing crime, or something that endangers someone's life, might be classified as urgent. Urgent cases require faster resolution and additional police cars. These factors determine how much police time any one incident occupies. Once a patrol car is attending to an incident, it can't attend to any others, even if another emergency call comes in.
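The factors above can be captured in a simple record. The sketch below is purely illustrative; the field names and values are hypothetical, not the system's actual data schema.

```python
from dataclasses import dataclass

@dataclass
class Incident:
    """A hypothetical emergency-call record (illustrative fields only)."""
    sector: str          # patrol area where the call originated
    hour: int            # hour of day the call came in
    urgent: bool         # urgent calls need faster resolution and more cars
    cars_required: int   # patrol cars tied up by this incident
    service_mins: int    # police time the incident occupies

def resources_needed(incidents):
    """Total car-minutes a set of incidents will occupy."""
    return sum(i.cars_required * i.service_mins for i in incidents)

calls = [
    Incident("A", 22, urgent=True, cars_required=2, service_mins=90),
    Incident("A", 23, urgent=False, cars_required=1, service_mins=30),
]
print(resources_needed(calls))  # 2*90 + 1*30 = 210 car-minutes
```

Summing car-minutes like this is the kind of resource estimate the planning stages need, regardless of the specific crime type.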

The four parts

To address this issue, GRAND-VISION makes use of a four-part framework: incident generation, optimisation, scheduling and simulation. Asst Prof Chase elaborates, “So we have this sort of four-step process where we generate probable possible incidents about how tomorrow might look. Then we come up with a plan that says, how many people do you need in each patrol area throughout the day to respond to those incidents? We map that to the actual patrol staff so that we can generate individual schedules for distribution. Then we simulate to verify that our plan should work well in practice.”
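The four-step process Asst Prof Chase describes can be sketched end to end as a toy pipeline. All function names, data and logic below are illustrative stand-ins for the real components, not GRAND-VISION's actual API.

```python
import random

def generate_incidents(history, n, rng):
    """Step 1 (toy): sample n incidents from historical (sector, hour) records."""
    return rng.choices(history, k=n)

def optimise_allocation(incidents):
    """Step 2 (toy): demand = number of sampled incidents per sector."""
    demand = {}
    for sector, hour in incidents:
        demand[sector] = demand.get(sector, 0) + 1
    return demand

def build_schedules(demand, officers):
    """Step 3 (toy): assign officers to the highest-demand sectors."""
    schedule, pool = {}, list(officers)
    for sector, _ in sorted(demand.items(), key=lambda kv: -kv[1]):
        if pool:
            schedule[sector] = pool.pop(0)
    return schedule

def simulate_dispatch(schedule, incidents):
    """Step 4 (toy): fraction of fresh incidents in a sector with an officer."""
    covered = sum(1 for sector, _ in incidents if sector in schedule)
    return covered / len(incidents)

rng = random.Random(0)
history = [("A", 22), ("A", 23), ("B", 10), ("C", 14)]
incidents = generate_incidents(history, 10, rng)
schedule = build_schedules(optimise_allocation(incidents), ["Ofc-1", "Ofc-2"])
print(simulate_dispatch(schedule, generate_incidents(history, 10, rng)))
```

The key design point is that step 4 verifies the plan against a *fresh* sample, not the one it was optimised on, so the evaluation is not circular.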

Incident generation

The first step is identifying where incidents are going to occur. This is vital: if a high number of incidents occur in one place and only one car is assigned to patrol that area, it may only be able to respond to the first incident; if another incident happens, it would not be available to respond. “If we can predict where incidents are going to occur, then we can plan accordingly. So the first step is to predict on a daily basis where incidents are likely to occur. So we look at historical data and try to draw patterns based on location and time where these incidents are going to happen,” says Asst Prof Chase.

Once that information is gathered, machine learning methods can generate samples of how a particular day may play out. If a law enforcement agency wants to plan for tomorrow, it can use the predictive algorithm to generate different possible samples and different sets of incidents that may occur. These events may have some variation but generally follow the patterns identified.
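The idea of drawing varied day-samples that follow historical patterns can be sketched with a much simpler stand-in for the deep generative model: draw incident counts per (sector, hour) cell around hypothetical historical rates, with a random element so each sample differs.

```python
import random
from collections import Counter

def day_sample(rates, rng):
    """Draw one possible 'tomorrow'. rates maps (sector, hour) to an
    illustrative average incidents-per-day; this crude draw is a stand-in
    for the system's deep generative model."""
    day = []
    for (sector, hour), rate in rates.items():
        # floor of the rate, plus one extra incident with the fractional probability
        n = int(rate) + (1 if rng.random() < rate - int(rate) else 0)
        day.extend([(sector, hour)] * n)
    return day

rates = {("A", 22): 1.6, ("B", 10): 0.4, ("C", 14): 2.0}
rng = random.Random(42)
samples = [day_sample(rates, rng) for _ in range(3)]
for s in samples:
    print(Counter(sector for sector, _ in s))
```

Each printed sample varies slightly, but all of them follow the same underlying spatiotemporal pattern, which is exactly the property the planner needs.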


Optimisation

The second step is optimisation. “I can take those incidents and put them into some kind of planning tool; in our case, we use a technique called optimisation. You basically have a target performance (like response time), a set of decisions you can make (like where police patrol at different times of the day) and a set of constraints (like the rule that a car can't leave an incident halfway through, and various other rules of engagement),” says Asst Prof Chase.

According to him, once these incidents are put into a set of equations that define the rules of the scenario, the solution algorithm tries to determine the choices and the patrol assignment that lead to the lowest possible average response time to those incidents. This then determines how many cars need to be assigned to a particular patrol area in a particular time range, for instance on a two-hourly basis every day.
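A much simpler greedy heuristic can stand in for the real optimisation model to illustrate the decision: add cars one at a time wherever the expected wait improves most, under a fleet-size constraint. The queueing-style cost (expected incidents divided by cars) and all the numbers are assumptions for illustration.

```python
def allocate_cars(expected_incidents, fleet_size):
    """Greedy sketch of the allocation step. The real system solves a
    constrained optimisation model; this toy version just adds each spare
    car where the marginal improvement in expected wait is largest."""
    alloc = {cell: 1 for cell in expected_incidents}   # at least one car per cell
    spare = fleet_size - len(alloc)

    def cost(cell):
        return expected_incidents[cell] / alloc[cell]  # toy expected wait

    for _ in range(spare):
        # cell whose expected wait drops most if it receives one more car
        best = max(alloc, key=lambda c: cost(c) - expected_incidents[c] / (alloc[c] + 1))
        alloc[best] += 1
    return alloc

# hypothetical expected incident counts per (sector, time block)
demand = {("A", "22:00-24:00"): 6.0,
          ("B", "22:00-24:00"): 2.0,
          ("C", "22:00-24:00"): 1.0}
print(allocate_cars(demand, 6))  # busiest sector A ends up with the most cars
```

The output is exactly the shape of decision described above: how many cars each patrol area gets in each time range.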


Scheduling

The optimisation model provides an idea of where cars should be, given the historical incident data, but it doesn't provide an actionable plan. This is where scheduling comes in. The scheduling component takes those requirements, such as the number of cars, maps them to the actual police officers, and generates individual patrol schedules. These schedules are calculated to produce the lowest expected response times per shift.
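The mapping from "cars needed per sector per time block" to individual schedules can be sketched as follows. The structure, names and break rule here are illustrative only; the real scheduler also honours manpower limits, break times and manual pre-allocations.

```python
def build_shift_schedule(required, officers):
    """Toy scheduler: fill each time block's per-sector car requirements
    with the available officers in order; anyone left over gets a break."""
    schedule = {o: [] for o in officers}
    for block, needs in required:              # e.g. ("08:00-10:00", {"A": 2})
        on_duty = []
        for sector, n in needs.items():
            on_duty += [sector] * n
        for i, officer in enumerate(officers):
            duty = on_duty[i] if i < len(on_duty) else "break"
            schedule[officer].append((block, duty))
    return schedule

required = [("08:00-10:00", {"A": 2, "B": 1}),
            ("10:00-12:00", {"A": 1, "B": 1})]
sched = build_shift_schedule(required, ["Car-1", "Car-2", "Car-3"])
for officer, shifts in sched.items():
    print(officer, shifts)
```

The result is a per-officer itinerary, i.e. the "individual patrol schedules for distribution" that the earlier quote mentions.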


Simulation

“So the last piece is the simulator, where we ask a machine learning model to generate a new set of sample incidents that also represent how tomorrow might play out. Then we simulate a possible dispatch scenario. We read the incidents one by one and decide: if this incident occurs, we would dispatch this person,” says Asst Prof Chase.

For each incident, the system calculates the expected travel time and the expected response time. Overall, it can determine how well the plan would perform if this scenario happened tomorrow. This gives an idea of the effectiveness of the plan and whether it's going to perform well against new incidents.

Making use of Machine Learning

The incident generation stage makes use of machine learning. The strength of machine learning is that it can look at a large quantity of historical data and identify general patterns that start to emerge. These patterns involve variables such as which locations have a higher probability of incidents at particular times of day, as well as some demographic information.

Asst Prof Chase says that the programme tries to identify variables that determine why patterns change. It takes all the selected incident data, in this case emergency calls, and uses predictive policing models to generate patterns of where and when incidents tend to occur and what kind of incidents they are. 

The team behind GRAND-VISION experimented with different types of machine learning methods. The method used in the final version is a generative adversarial network (GAN), the same kind of network that some image generators use. It introduces a randomised element into the samples it generates, which means the system can generate different variations of how tomorrow's incidents might play out.

The system looks at the entire jurisdiction of the law enforcement agency and tries to predict, on a daily basis, how many incidents are going to occur. If the deep learning system predicts 200 incidents, it will provide a location and a time for each of them, following the distribution patterns of what you would typically expect to see on that day.

“We don't necessarily worry about predicting specific types of incidents, we're not trying to predict exactly where a murder will happen or exactly where a robbery will happen. We're just trying to predict how much time and police resources (manpower) an incident is going to take up so that we can plan accordingly. So this is what the machine learning model is going to do for us.” explains Asst Prof Chase.

Ethical concerns

The use of machine learning and any form of artificial intelligence raises ethical concerns, especially in terms of existing biases and prejudices, because the data fed into it comes from people. “We have to be aware that data is very rarely genuinely objective in its nature because even if you're measuring something, the human has decided what to measure, what they think is important,” says Asst Prof Chase.

There will always be human fingerprints on machine learning, so using it responsibly means recognising that humans make a lot of decisions in the creation of machine learning models. It is important to consider where the data comes from. According to Asst Prof Chase, the reason there have been problems with racial bias in some systems in the US is that their data comes from police officer reports. If a police officer has some sort of bias, it is reflected in the data.

In the case of GRAND-VISION, the data came from emergency call records. That means there is less direct influence from individual police officers, because the calls are made by the population. This ideally leads to less bias in the system.

Applying this to other locations

While GRAND-VISION provides an excellent solution for law enforcement, it is limited by the infrastructure of the country it is being used in. 

According to Asst Prof Chase, the first challenge in applying this system in other countries is the data available: “The most obvious foundational issue will be the existence of sufficient data to make reasonable predictions. When looking at a country like Singapore, it has more data than it really needs. Not every country will have initiated that kind of data infrastructure straight away. It takes time to accumulate a certain amount of information.”

Another factor to consider is the culture of the country or of a particular city's police force. According to Asst Prof Chase, it is important to take time to listen to perspectives on the problem from the social sciences, because there are other dynamics to understand in the relationship between the population and the police. “If you have trust between the population and the police, they're more likely to be willing to call the emergency number. That means your emergency records are more likely to be a true reflection of the ground situation. If there isn't (trust), then people may call reluctantly, causing your emergency calls to reflect only a section of the reality.”

Methodology & References
  1. Chase, J., Phong, T., Long, K., Le, T., & Lau, H. C. (2021, June). GRAND-VISION: An intelligent system for optimized deployment scheduling of law enforcement agents.
  2. Goldenberg, P., & Gips, M. (2024, March 25). AI is set to revolutionize policing: Are we ready? Police1.
  3. Srinivas, N. (2023, February 24). The ethical debate of AI in criminal justice: Balancing efficiency and human rights. ManageEngine Insights.