My research focuses mainly on the adoption of educational technology by teachers, pupils, students, and lecturers, as well as by employees and managers in the workplace. I am interested in understanding what makes people decide to use a certain technology, and also what prevents them from doing so. A new technology has the potential to improve educational processes, but in practice many problems often arise. Social networks are a good example. At first, educators were very enthusiastic about their ability to encourage collaboration and create connections, enabling new modes of sharing knowledge and information. But then we became aware of social networks’ capacity to circulate fake information and create negative social pressures among students, pressures that may even lead to adolescent suicide.
Therefore, when examining a new technology, we must go beyond its functional aspects and take a deeper, broader look at its implications. What kinds of relationships are created between humans and the new technology? And what does it say about us as people and about our society, in terms of values and ideology? I try to look at things from a social and cultural perspective, since these forces shape learning and teaching. By investigating how people connect with a technology, and how this interaction affects them, we can formulate the best approaches to integrating technology into education.
This field is naturally very dynamic. When I started exploring these issues, we looked at very simple technologies, like interactive courseware; we then moved on to online learning environments, followed by distance learning, social networks and other Web 2.0 learning environments, and, of course, smartphones as learning tools. Today, most of the research centers on AI.
I divide my research into two types: studies that are more empirical and studies that are more theoretical. In the theoretical studies, I develop models aimed at understanding things conceptually. One theoretical project deals with students’ overreliance on AI. We are building a model aimed at understanding what overreliance is and how to identify such cases. This analytical model will allow us to examine whether a certain group of students, or any other users, are employing AI responsibly or irresponsibly. We will also be able to investigate the factors affecting overreliance, whether it is their level of expertise, their self-confidence, how much they trust the technology, or even how risky or effortful the task feels to them. Once we know whether, how, and why they are using AI in helpful or unhelpful ways, we can guide them when needed. In other words, by mapping out what drives overreliance, we can offer practical guidance for preventing it.
As for the empirical side of my research, in one study we are looking at how aware engineers are of their social responsibility when developing a new technology. Take people in the field of computer science, for example. Do they consider the social and ethical ramifications of the technology they are creating while they are writing its code? To study this issue, my student is conducting in-depth interviews with software engineers in Israeli high-tech companies. He seeks to learn about dilemmas they have faced and about the influence of their organizational context, such as social norms. If we find a lack of appreciation for the ethical and social responsibility their work carries, it would suggest that these aspects should be incorporated into higher education programs in computer science and computer engineering.
In another project, my students examined AI’s effect on teaching. It has been proposed that AI will save teachers time on routine tasks, freeing them for more complex duties, and that it might even help reduce burnout and teacher attrition. The students checked whether this is indeed the case and found no reduction in workload, and possibly even the opposite. But, of course, it is still early days, and we will have to see whether this finding holds true in the long term as well.
The impact is already evident. In the educational system, and certainly in higher education, students are writing papers with AI, forcing academia to contend with AI-written assignments. We have even begun to examine the reaction to this change. In a study I am involved in, funded by the MOFET research foundation, we found that about 23% of academic lecturers resist change: they adopt restrictive policies that block or limit students’ use of AI. Around 40% take a more passive stance: they accept AI use but make almost no real changes to their teaching practices. Only about 37% actively lead and encourage change. These are the lecturers who go further, reshaping their course materials, assessment methods, and the ways they engage students in AI-supported environments.
Beyond practical changes and pedagogical adjustments, the use of AI in education will differ fundamentally from that of all prior technologies. All previous learning technologies can be viewed as tools, but AI acts as an agent. This is a huge difference. When you treat a technology as a tool, you control the tool. When you use it as an agent, you enter into a dialogue with the technology; you create together with it; it becomes a partner. This raises many interesting questions. Where is the human’s place in the process? Does interacting with an AI platform take away from the human’s agency? It will be interesting to see how AI affects education and other spheres. What will be the impact of ChatGPT on creativity and critical thinking? And how will we deal with a reality in which AI is not only creating fake news but also accelerating its spread, much as is already happening on social media?
First, we need to identify the models of use that are emerging over time. There are AI uses that support learning and ones that do not, and we need to define both types of use models in order to improve educational processes. Today, the focus in the field is still on stakeholders’ positions, such as those of lecturers, students, and parents, which is important but not sufficient. We also have to develop responsible and effective models of use. After all, it is not just how much one uses AI, but also what one does with it. For example, one interesting model is using AI chatbots instead of people to run soft-skills training. This approach can cut costs and make it possible to reach many more learners. But the real questions are: Does it actually help them learn, and what are the drawbacks of such solutions? In my research, I am trying to explore such models and their effects on learning and teaching.
As AI is here to stay, research in the field of instructional technologies needs to make society aware of AI’s consequences and to help people avoid, or at least be cautious about, the ways in which it may harm us. The goal is not to scare but to raise awareness and to provide the tools and know-how that empower people to use AI correctly.