AI in the workplace: First steps for making organisations future-ready
By SMU City Perspectives team

Published 1 February, 2023


Stop thinking about AI as this ‘magic thing’ and get back to your basic problems. Nearly anything that you'll want to do with digital technology and IT will involve the use of AI methods to enhance what previously could be done. If you approach it with a clear-eyed, sensible approach like that, and not focus on AI sophistication for its own sake, you will make good progress.

Steve Miller

Professor Emeritus of Information Systems and former Vice-Provost (Research)

In brief


  • Instead of feeling intimidated by the sophistication of AI, organisation leaders should approach the technology with the basics in mind - by focusing on the existing problems that need solving, and then identifying specific ways that AI can enhance the solution.
  • While some existing jobs will inevitably become automated, a larger number of existing jobs will be augmented through the use of AI. In addition, new types of jobs will be created as a result of the emergence of new products and services made possible through AI enablement. 
  • Governments can facilitate this transition in three ways: by supporting employees in reskilling and upskilling, by creating outreach ecosystems where companies can get technical guidance on their first few AI application projects, and by ensuring that business tax policies incentivise investment in people and skills development as much as investment in capital equipment.


As a result of Artificial Intelligence (AI) and other developments in digital technologies, the nature of work will inevitably and radically change. What might this evolving world look like? According to Steven Miller, Professor Emeritus of Information Systems, we are already witnessing a growing number of examples of human work being augmented and amplified by AI-enabled smart machines, and this will become far more common across a wider range of organisations. This was a key message in his recently released book, “Working with AI: Real Stories of Human-Machine Collaboration” (MIT Press), co-authored with Thomas Davenport. The book features 29 case studies of people doing their everyday work with AI-enabled systems, as well as seven synthesis chapters based on insights extracted from all of the case studies.

In this video interview (see below), Prof Miller shares his views on how organisation leaders can begin this transition to working with AI. He also comments on other implications that governments and policymakers need to take note of.  


Q1: What hinders organisation leaders from deploying AI into their work settings?

I think one of the biggest obstacles we have in terms of various kinds of organisations dealing with the use of AI in the workplace is they think AI is special. We need to demystify that. We need to get our feet on the ground. There is a wide body of methods that we refer to as AI (because AI is not one thing, AI is many things, across a whole technology stack for that matter). Rather than saying, “we're going to use AI,” actually, we need to stop saying that and we need to be more specific and down to earth - like, “we need to improve our predictions. We need to improve our recommendations. We need to improve our simulations”. Well, you say, “but that's not new.” Well, that's the point. Bring it back to the real work, the basic work. In almost anything we do today, we will use AI methods to enhance the way we do it. So, frankly, stop thinking about AI as this magic thing to use and get back to your basic problems, get back to your real use cases. And nearly anything that you'll want to do will involve the use of AI methods to enhance what previously could be done. And if you approach it with a clear-eyed, sensible approach like that, and if you don't forget that good use cases are the key to everything, not the degree of AI sophistication, you will make progress. But if you think about AI as something “special” or even mystical, you're going to have problems with your deployment efforts.

Q2: How will AI impact the labour market? 

I want to talk about the issue of the impacts of AI-enabled technology on the labour market. So, the thing that comes to everybody's mind immediately is job loss. What will be the extent of the job loss? Look, we have a lot of history with using technology in the workplace, not just in the last few years, but over hundreds of years. Of course, there are going to be some types of jobs that are so regularised and, in a way, so routinised that you can automate them. And with AI-enabled abilities, the extent to which some jobs can be automated will be expanded. So that's undeniable. You will have an expanded range of jobs that can be automated, but it doesn't stop there. That's the key point. More jobs will be augmented and complemented, where the AI-enabled tools will become super tools, and people together with the tools will be able to do things they couldn't do previously. This isn't a pipe dream; it's already happening now. We have examples in the book that describe this, and we've seen similar things happen with the introduction of prior generations of technology over prior decades and centuries. There will be a lot of new jobs created. We'll be able to do new things in energy, in biotech, and as a result of 3D manufacturing. And we'll be able to create jobs and services that didn't exist before.

Now, there will be some transitional issues. Some of the jobs displaced will be different from the jobs created, so there will be a need for reskilling and redeployment of the displaced. And that's where we need some of the government help. But we have shown that the capacity to create new kinds of work using new technology is almost limitless. So, that's what we have to put our minds to doing. That's the task in front of us. And it's not a prediction of how many jobs will be displaced or not displaced, because no one knows. The real issue is, it depends. It depends on the choices that governments and companies make, and it depends on their attitudes as they go forward with using this technology.

Q3: What can workers do to prepare for the age of AI? 

So, what are some of the specific things that, for example, the students of today can do to prepare for AI? Now, there's the obvious part of people learning the actual detailed innards of the technology, the math behind the algorithms, how to write the software to code the models. Of course, you need people to do that. But in any project, the people who do that kind of work are actually a very small number in proportion to the total people you need to do the project. You need the domain experts. You need the people who know what the real business problems are. You need the expertise to be able to evaluate the output of the AI systems. Just because you get an output from an AI model doesn't mean it's correct, right? So, you need people who really understand what false positives and false negatives mean, what the error rates are and what they mean, in a particular problem and domain setting. And who can evaluate whether this prediction (the output of the AI model) makes sense. People in organisations doing economics, people in accounting, people in finance - they need to be strong in their domain, and they need to be strong in their ability to judge and verify whether this information (the output of the AI model) makes sense. If so, I use it. If not, what kind of feedback do I give it (the AI system)? So, this notion of the human employee partnering with the AI system becomes very important.

Some people will do the technology work to create and deploy these AI-enabled systems, and that is fine. We need that, of course. But we need many, many more people in the various parts of the domain who really understand what the ground truth is, so that as we use these systems, we don't have unrealistic expectations, and we find the right ways to put the human capability and knowledge together with the machine capability and knowledge.

Q4: What role do policymakers play in this transition to AI-enabled workplaces? 

Let me say something about the first step that I think governments need to take in creating the environment so that companies and organisations - both private and public sector, profit-driven and non-profit - can proceed with using new technology, including, of course, AI-enabled technology. Because I take it as a given that essentially any new technology going forward, and even as of today, will be AI-enabled. And the first step is to have policies that do not undervalue the importance of humans. Now, you might say, what do I mean there? In some countries - take the United States in particular - the tax policies are set up to be very favourable if you invest in capital, and to some extent they penalise you if you retain human workers on your payroll, because of the extra payroll taxes that you have to incur. In other words, in some countries you get special tax incentives for investing in more capital, and you are in effect penalised from a tax perspective for employing humans. Of course, we want companies to invest in new capital, as this is a means of investing in new technology, but we don't want companies to be penalised for retaining their human workers, for goodness' sake. And if we get these kinds of policies aligned - certainly in the U.S., and perhaps in other countries that follow some degree of that mindset - it will make a difference. We need to be investing in technology, in machine-based capital, as well as in human capital. And we want to make sure the institutional incentives don't tilt the ground in favour of machines only, or else we really will have problems we wish we didn't have (related to labour displacement).

Q5: How can we encourage more organisations to adopt AI in their work settings? 

I want to comment on things that countries in the ASEAN region and the greater Asian region can do to help with the transition, so that more firms and organisations - in the profit-making private sector and the non-profit public sector - can transition to better ways of working. In several of these countries, there are already some government-funded groups that help with transitional issues. They help firms do their early applications. The government can't necessarily get too micro in this kind of thing, but it can create some incentives that help companies de-risk a little bit in their early applications. For example, what AI Singapore does with its “100 Experiments” programme is a good example of trying to get a broader base of companies into their early applications with project support. At some point, the companies participating in the 100 Experiments need to pick up the AI application effort on their own. But there's a big learning curve for getting started. And how do we create the ecosystems so that companies lacking the experience are willing to try their first one or two projects, and then build the internal capabilities so they can do more of that on their own? I think there's great opportunity to do more of this - similar to what the AI Singapore 100 Experiments programme is doing - across and within the various ASEAN countries.