For my capstone project in the MS in IT program at Champlain College, I put together the following short presentation and not-entirely-short paper.
Agile definitely needs an update in the age of AI, but what will this look like? Let’s be part of shaping it.
(Also, yes, you can see my legal name at the top of the pres. Sowocki is, indeed, a nickname from smashing together my first initial and last name and saying it super fast. I answer to both, but in a crowded room, you are way better off with Sowocki.)

“Individuals and Interactions” vs. “Processes and Tools”: The Organizational and Cognitive Dimensions of Agile in an AI Age
Table of Contents
Introduction and Context
Research Questions and Methodology
Literature Review and Methodological Details
Preliminary Findings
Conclusion and Next Steps: Research and Industry Recommendations
Introduction and Context
Software engineering (SE) teams which use Agile practices comprise humans, and factors specific to these humans significantly influence teams’ eventual success or failure (Dutra et al., 2021; Ulfsnes et al., 2024). In particular, cognitive factors can support or hinder successful adoption of Artificial Intelligence (AI)-based coding tools in SE projects, with cognition defined to include mental characteristics that “correspond to the way the software developer reasons” (Dutra et al., 2021, p. 446). As a key cognitive factor, mindset has been previously discussed in terms of ideal Agile practices (“the Agile mindset”) but is less well defined in the context of the ongoing transition to AI-assisted software development and the human minds this transition involves. Additionally, which human factors influence SE projects, and in what ways, remains understudied.
Considering cognitive factors such as mindset is particularly important given the rapid rise of AI-based coding tools, which SE teams are using in myriad ways (Pereira, 2025). Farrow (2021) discusses a “growth mindset” and how it shapes the way staff both anticipate and adapt to AI technologies. Through empirical research, Farrow finds that staff in a range of organizational settings react in a variety of ways to the influx of AI technologies. Not all staff respond identically to the perceived potential that they and their positions could ultimately be displaced by AI, and understanding how these staff members view the influx of AI in various “futures scenarios” is closely linked to understanding how enterprises can respond appropriately. This includes how organizations can best support the strategic adoption of AI-based coding tools.
Research Questions and Methodology
The “growth mindset” which is key to a successful organizational AI transition may or may not align with the much-extolled “Agile mindset.” This leads to the research question: Does the “growth mindset” discussed by Farrow (2021) as a factor enabling successful organizational AI transitions correlate with the Agile mindset (Ozkan et al., 2020) which members of SE teams are encouraged to display and cultivate, and if so, in what ways? Specifically, the question is whether there are practices which organizations can undertake to bridge the two mindsets, as part of a broader restructuring to adapt to the tensions and opportunities being brought about by AI.
Examination of this research question will be guided by Dutra et al.’s (2021) suggestion for “further practice-oriented research” on human factors in SE teams (p. 442), with methodology grounded in scholarly, empirical research. Drawing on diverse scholarly sources from fields such as Software Engineering, AI and Society, Management (including Organizational Theory), and Agile Development allows for the construction of connections that lead to robust insights. Due to a lack of access to quantitative data from scholarly sources, the peer-reviewed sources which are utilized are mainly qualitative in nature (Dutra et al., 2021; Farrow, 2021; Hutzschenreuter and Lämmermann, 2024; Ulfsnes et al., 2024; Wivestad et al., 2024). From this scholarly base, it is possible to branch out and include select industry-oriented surveys and reports in order to understand how many software developers are using AI tools, and in what ways, as a quantitative supplement (Shani, 2024; StackOverflow, 2023). Additionally, one non-peer-reviewed qualitative source will be utilized to provide primary-source illustration of concepts grounded in the scholarly research (Pereira, 2025).
Taken together, this mixed-methods approach will allow for practice-oriented, theoretically grounded insights relevant to current software development practices and challenges, with emphasis on scholarly qualitative sources for conceptual grounding and direction. In several instances, insights from non-peer-reviewed sources will be validated through scholarly research. In cases where commercially oriented sources offer different insights than the scholarly ones, this will be noted and critically discussed. Additionally, attention will be paid to research gaps to inform ongoing scholarly inquiry, along with recommendations for future industry practice.
Literature Review and Methodological Details
When Dutra et al. (2021) examined scholarly literature on the “human perspective” in SE teams, they showed that a range of human factors affect SE team outcomes, but it is not yet established which factors have which effects. These authors note that further research is needed to determine how human factors “influence individual motivation, agile mind-set, team climate, software quality, or agile transition in traditional organizations” (p. 442), with a particular need for research into cognitive factors that establish how the software developer reasons when doing SE tasks. Since the way members of Agile SE teams do these tasks is being transformed by AI-based coding tools, Farrow’s (2021) work on mindset and its effects on staff is key. Farrow pays particular attention to how staff both anticipate and adapt to AI technologies and how they understand and react to the potential that they and their positions could ultimately be displaced by AI.
Through a series of four participatory workshops, Farrow gathered data about how staff and organizations view various “futures scenarios” involving potential company AI transformations. Based on analysis of the data collected at these workshops, she emphasizes that responsibility under this “growth mindset” AI transformation framework should not be displaced onto the individual worker: “The research does not suggest that adopting the ‘right mindset’ is the only option for those who have no power in [the face of] a change,” such as the change of AI transformations (p. 904). Instead, “compassion, empathy and authenticity is [sic] required for supporting people in a positive outcome” (p. 907) – with this support coming from the organization and flowing toward the staff. Of course, staff also have responsibility to adapt under Farrow’s analysis, but the initiative and leadership for adopting a healthy growth mindset, modeling this, and enabling staff to respond in kind start with the organization.
At a broader strategic level, Hutzschenreuter and Lämmermann (2024) note that companies frequently neglect to address “fundamental strategy questions” when determining how various AI tools will be rolled out across the enterprise. They contend that if firms did so, they would be better able to “connect IT and business strategies” to form a new and powerful strategy “built on the inherent technological characteristics of digital systems” at hand (p. 2). In the case of AI systems, an effective novel digital business strategy would need to “effectively exploit AI’s potential task superiority… and proactively deal with the technology’s dynamic nature” (p. 3). In other words, such a strategy would utilize those areas where AI is more efficient than human staff, while prudently accounting for the fact that the technology is far from perfect and is in fact still rapidly developing in nuanced ways.
First-person analysis of this rapid development comes from Pereira (2025) in an Early Release Edition of the book Generative AI for Software Development. Specifically, Chapter 8 presents implementation success stories featuring reflections from developers and other innovators in software engineering. These qualitative insights are valuable because they come from individuals on the front lines of GenAI-augmented software development. The source also describes in some detail the transformation in software engineers’ adoption of GenAI coding tools from 2023 to 2025. During this time, there has been rapid progression in the underlying machine learning models of AI-based coding tools, and engineers’ adoption has been shifting in nuanced ways as a result.
Chapter 8 also includes a number of analogies to other industries that have undergone technological transformations in the past. These analogies illuminate the mindset of employees of the past, and Pereira pairs them with observations and suggestions about potentially helpful cognitive approaches that can be adopted by staff who are adapting to AI now. For instance, Pereira notes that the introduction of Automated Teller Machines (ATMs) “did not reduce teller jobs” as “more bank branches opened as a result of those lower operating costs” that were caused by the influx of ATMs (Chapter 8). Implication: Staff who are worried about being displaced by AI should relax, as more jobs in the AI version of “more bank branches” will surely be forthcoming!
Pereira concludes Chapter 8 with a discussion of how collaboration and communication have changed and are changing as a result of GenAI coding tools. He distinguishes between simple projects that can be accomplished by one person and those that require more complex teams: “Working solo… means that all of your projects’ context lives in your own brain… This extension wouldn’t be so easy in a team with multiple people who share a knowledge base… in a larger team with existing processes and code, adopting AI tools to generate code has some added nuances” (Chapter 8). Here Pereira indicates that there is a need for robust collaboration tools for teams in larger enterprises “who share a knowledge base” but may not interact on a regular basis because success in adopting AI-based coding tools hinges crucially on a common understanding of project context.
Shani (2024) examines just such large enterprises when describing a 2023 survey of “500 U.S.-based developers at companies with 1,000-plus employees” (para. 1), jointly conducted by GitHub and Wakefield Research. These survey results offer commercially oriented data points on the extent to which US developers are using AI in large enterprise environments. Shani notes that 92% of U.S.-based developers used AI in 2023 either in their work, outside of it, or both – yet the surveyed developers wanted collaboration to be a bigger component of their performance reviews, rather than narrower technical metrics like how many lines of code they wrote (or generated) or how many issues they helped resolve.
Strikingly, 80% of developers believed that AI would ultimately make their team more collaborative, a belief that is at odds with academic research on the subject (Ulfsnes et al., 2024; Wivestad et al., 2024). Further research on this point of tension is needed, particularly in the Agile SE context. Also striking was that many developers expressed a desire for a reliable way to receive direct feedback from end users about the software features the developers had had a hand in building. In the Agile SE environment, it is worth considering why developers may struggle to get such feedback given Agile’s emphasis on prioritizing customer collaboration in small, self-governing teams.
StackOverflow (2023) conducted another survey, of over 89,000 developers in 185 countries, with results indicating that 96% of developers who were already using AI tools as of 2023 did not see great potential for those tools to assist them in collaborating with teammates. Yet nearly 30% of those who described themselves as not yet using but “interested in using” AI tools at some point in the future were interested in such collaborative functions, and over 41% of those who were uninterested in using the tools at present thought that AI coding tools could have collaborative potential. This suggests that early adopters of AI-based coding tools were less interested in collaboration, but later adopters, as well as skeptical non-adopters (or not-yet-adopters), were more open to the tools as a potential boon to team collaboration. Indeed, it is possible that those who rated themselves as not interested in using AI would be more open to such use if its collaborative potential were more fully and deliberately engineered and realized by the tools’ developers and organizational implementers.
It must be noted that this StackOverflow survey shows discordant results with the GitHub survey (Shani, 2024) regarding AI tools’ potential to enhance team collaboration. GitHub found that the vast majority of developers thought AI tools could help with team collaboration, while StackOverflow found that most developers thought the opposite. This indicates that how “most” developers view AI-based coding tools’ effects on team collaboration is simply not clear. It is also important to emphasize that both of these surveys were commercial in nature and utilized convenience samples, and neither the surveys’ methodology nor their findings were peer-reviewed. As such, they offer provocations that could be further investigated and validated through scholarly research, but few definitive answers. For instance, it is possible that developers are sensitive to how questions about potential collaboration effects are asked, or that there are nuances among sub-groups of developers (something the StackOverflow survey results certainly hinted at).
On the scholarly side, Ulfsnes et al. (2024) conducted an “exploratory multi-case study” (p. 4) using snowball sampling of developers who were actively using AI coding tools in Agile SE teams. Their analysis begins by noting that “there are currently no standards or norms for how, when, and for what purpose you should apply GenAI to [sic], and employees in software-intensive organizations are using it based on their own preferences” (p. 5). Based on their empirical results, the authors find that this lack of standards could be both good and bad for developers’ collaboration practices, as well as for their workflow and the overall rhythm of their days: “An increased reliance on tools like GenAI may enhance individual productivity while inadvertently reducing inter-team interactions, ultimately affecting long-term [developer] job satisfaction and collective productivity” (p. 3).
Accounting for this trade-off is described as a careful balancing act, and grasping developers’ mindset in how they use GenAI tools is key to understanding how to negotiate and achieve such balance. For instance, there is growing attention to the practice of prompt engineering among developers in order to “get” GenAI tools to write usable code, code which is then ideally adapted and verified before being integrated into the larger project code base. In Ulfsnes et al.’s study, developers describe this practice of prompt engineering “as feeling similar to programming with a partner” (p. 11, emphasis added). This “similar feel” effect causes developers to engage in less pair programming as they become more attuned to the specific practices which can prompt the GenAI tool to respond with the desired code. The feeling of similarity is all the more significant given that prompt engineering is increasingly replacing previous “rubber ducking”¹ practices, in which programmers would explain ideas to a peer (animate or inanimate) who takes on the role of the non-judgmental “listening” rubber duck. In other words, the GenAI becomes the sounding board to which the programmer explains their ideas using the same kind of natural language they would previously have used with a listening peer, whether human or duck.
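To make this shift concrete, the following is a minimal sketch of what “rubber ducking” a problem to a GenAI tool might look like in practice. It assumes the OpenAI Python SDK and a chat-capable model; the scenario, prompt wording, and model name are illustrative assumptions of mine, not examples drawn from Ulfsnes et al.’s study.

```python
# Minimal sketch: "rubber ducking" a bug to a GenAI coding assistant.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY
# in the environment; the model name below is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The developer narrates the problem in natural language, much as they
# might once have explained it to a pair-programming partner (or a duck).
rubber_duck_prompt = """
My Python function dedupes a list of user records, but records with the
same email in different capitalization are slipping through. Walking
through my logic:
1. I build a set of seen emails.
2. I check `if record.email in seen` before appending.
I suspect I should normalize with .lower() before both the check and
the add. Am I missing anything?
"""

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model would do
    messages=[
        {"role": "system", "content": "You are a patient pair-programming partner."},
        {"role": "user", "content": rubber_duck_prompt},
    ],
)

# As with a human sounding board, the reply is a starting point: the
# developer still adapts and verifies any suggested code before it
# reaches the shared code base.
print(response.choices[0].message.content)
```

Notably, much of the value lies in composing the prompt itself: articulating the problem step by step is precisely the act that explaining it aloud to a peer (or a duck) used to occasion.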
Does this mean that prompt engineering ought to “count” as collaboration in an evaluative or prescriptive sense? None of the developers interviewed by Ulfsnes et al. make the case for this, but it is notable that, in practical terms, replacing the practice of asking peers for input and advice short-changes developers on some of the very factors that they themselves consider key to a meaningful and rewarding work environment. While developers are wont to gripe about interruptions from colleagues “negatively influencing [their] ability to focus or work as planned… being able to help a co-worker [is] generally considered positive and rewarding” (pp. 2-3). What are the costs of missing out on such positive and rewarding feelings from “being able to help a coworker” because that coworker is now bringing all of their questions and confusions to GenAI? Are there ways that that coworker could have asked for help from their colleague without interrupting them in that negative (and socially sanctioned) sense that disrupts their “flow”?
Wivestad et al. (2024) look closely at some of the costs of missing out on interaction and collaboration as a result of increasing use of AI-based coding tools, and they reach some provocative conclusions. These authors examine Agile development practices among SE teams but focus on users as well as non-users of GitHub’s AI-based Copilot feature in a large Norwegian public sector organization. Similar to Ulfsnes et al., the authors find that developers who use GenAI (here, Copilot) report less dependence on their colleagues to answer code questions as compared to non-users. The Copilot users are sanguine about these developments, but the authors note that such “independence” from team members could be a double-edged sword: While the developers report less frustration and more ability to focus on less mundane (and more interesting) tasks as a result of using Copilot, this comes with troubling implications for “interdependence and team unity” (p. 127). At the larger team level, this shift in whom developers tend to depend on “could pose extraordinary disruption for agile teams and Scrum Masters, whose main tasks are to facilitate collaboration and remove team obstacles” (p. 127).
Based on these results, the authors contend that the rise of AI coding assistants will require “a fundamental shift in managerial planning and execution” of Agile SE projects (p. 123). Importantly, the authors define “managerial” here in a broad, Agile way that includes both the Scrum Master and the developers who must take on (and be responsible for) certain managerial responsibilities within a self-governing Agile Scrum team. Under conditions of such a fundamental managerial shift, Wivestad et al. contend that overuse or misuse of Copilot as the facilitator of (human-AI) collaborations and remover of obstacles could cause human users to become isolated on an “island of joy” – ever more dependent on Copilot (and, on the surface, happily so), but less meaningfully integrated into their human teams.
Preliminary Findings
The main preliminary finding from this research is:
- The Agile mindset itself, as well as how it is popularly communicated and taught, will need to evolve as AI coding tools become more widely prevalent and relied upon by SE teams. (Preliminary finding #1)
Such an evolution is necessary because it is unrealistic to expect that the adoption of AI-based coding tools (which, after all, is already well underway) could be molded to conform to historic Agile principles. When Wivestad et al. (2024) note disapprovingly that Copilot users are delighted at their newfound “independence” from their human colleagues, they ground that disapproval in the tools’ implications for “interdependence and team unity” (p. 127) – surely valid concerns in a world in which even the keenest adopters of AI-based coding tools acknowledge that “in a team with multiple people who share a knowledge base… with existing processes and code, adopting AI tools to generate code has some added nuances” which require team members to work together effectively (Pereira, 2025, Chapter 8). But Wivestad et al.’s concern is really about how such interdependence and team unity are enacted according to their present understanding of Agile principles and of the Agile environment. If Scrum Masters and Agile teams writ large find that old ways of enacting Agile are not fit for purpose in helping AI-powered teams collaborate and remove team obstacles, teams might well choose to approach this shift itself in an Agile way – and this is the needed evolution that will make Agile fit for purpose in the AI age.
Because of the organizational complexity involved in the AI transformation, an effective Agile evolution would need buy-in from the very top of the organization, and would thereby implicate firm strategy. Hutzschenreuter and Lämmermann’s (2024) analysis is helpful when considering this enterprise-wide strategic level, but their work does not touch on how an organization can communicate with and train the human staff who are often less efficient than AI tools at certain tasks, yet still very much needed to fill in the gaps where these tools fall short. When these authors contend that a successful digital-AI transformation strategy “systematically bundles and exploits self-learning technologies to achieve individual firm goals” (p. 3), the term “self-learning” refers to the capabilities of the AI systems. Yet insofar as human staff across the company continue to work closely with these AI systems, humans must also become dynamic self-learners in order to work efficiently and effectively with those AIs. For instance, Pereira (2025) describes at length how the Shopify SE team’s processes have transformed as a result of AI-based coding tools:
These tools allow the Shopify team to move from a fluid process of writing code to a process with clear separation between planning the implementation (which goes into the prompt) and actually implementing it (most of which the tool does)… [this] exemplifies the changes I’m seeing in my own work as a CTO and in the descriptions I read from high-performing engineering teams. (Chapter 8)
These changes, while welcome on the part of the Shopify SE team, did not occur without significant modifications to internal SE processes, including how teams learn and govern themselves. Notably, the change in how the Shopify SE team works is distilled here through the intermediary of a CTO (Pereira, the book’s author), who can then communicate this shift up the organizational value chain, including its implications for organizational strategy in terms of a healthy organizational growth mindset. This leads to this work’s second main preliminary finding, followed by a brief illustrative sketch of the planning-versus-implementation split described above:
- “Self-learning” for humans implicates human mindset in ways it does not for AIs; productive “self-learning” in an organizational AI transformation therefore requires a growth mindset at all levels for human organizational actors, starting from the top of the organization and proceeding down to the level of individual employees. (Preliminary finding #2)
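Before proceeding, it may help to make Pereira’s planning-versus-implementation split concrete – the very shift that demands such human self-learning. The following is a minimal, hypothetical sketch in which a human-authored, team-reviewed plan is rendered as the prompt; the data structure, field names, and workflow are my own illustrative assumptions, not Shopify’s actual internal process or anything prescribed by Pereira (2025).

```python
# Hypothetical sketch of "planning goes into the prompt, the tool does
# most of the implementing." Field names and workflow are illustrative
# assumptions, not Shopify's actual process.
from dataclasses import dataclass

@dataclass
class ImplementationPlan:
    goal: str          # what the change should accomplish
    steps: list[str]   # the human-authored, team-reviewed plan
    constraints: str   # guardrails the generated code must respect

    def to_prompt(self) -> str:
        """Render the plan as a prompt for an AI-based coding tool."""
        numbered = "\n".join(f"{i + 1}. {step}" for i, step in enumerate(self.steps))
        return (
            f"Goal: {self.goal}\n\nPlan:\n{numbered}\n\n"
            f"Constraints: {self.constraints}\n\n"
            "Implement step 1 only, then stop so the diff can be reviewed."
        )

plan = ImplementationPlan(
    goal="Add rate limiting to the /checkout endpoint",
    steps=[
        "Introduce a token-bucket limiter keyed by customer ID",
        "Return HTTP 429 with a Retry-After header when the bucket is empty",
        "Cover both paths with unit tests",
    ],
    constraints="Follow the existing middleware pattern; add no new dependencies",
)

# The plan-as-prompt, not hand-written code, becomes the artifact the
# team reviews and governs.
print(plan.to_prompt())
```

The design point is that team review and self-governance now attach to the plan-as-prompt rather than to hand-written code – exactly the kind of modification to internal SE processes, including how teams learn and govern themselves, noted above.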
I would further extend the authors’ analysis of how “firms must be clear about what the [AI] technology can do and what it cannot do in order to strategically succeed in organizational change” (Hutzschenreuter and Lämmermann, 2024, p. 16) beyond how firms communicate about this among their executive teams and in their (internal and external) organizational strategy materials, to how they communicate with the staff on the front lines of enacting those strategies. Indeed, how firms “proactively deal with the [AI] technology’s dynamic nature” requires a true growth mindset, from the highest executive level down to all staff whose work is being transformed (Hutzschenreuter and Lämmermann, 2024, p. 3).
This brings us squarely back to the question of employees’ mindset, and how firms enact “compassion, empathy and authenticity… for supporting people in a positive outcome” (Farrow, 2021, p. 907). Firms’ adoption of such an empathetic and authentic approach could help address Hutzschenreuter and Lämmermann’s (2024) concern that as “contemporary AI redesigns organizational structures and job markets, high levels of human resistance and inertia will occur” (p. 16). Some level of human resistance may well be inevitable, but, as the third preliminary finding of this work holds:
- What could morph into intractable resistance could well become powerful co-design if firms understand employees’ mindset and work with and not against staff in adapting to the AI transformation. (Preliminary finding #3)
Indeed, what Hutzschenreuter and Lämmermann (2024) refer to as the need for firms to “appropriately handle black box perception” – that is, not to let end users and the public come away with the sense that the company’s AI-powered tools are “black boxes” no one can check or understand – could be better addressed if members of Agile SE teams (who, after all, work most closely with many emerging AI tools) were made part of the organizational strategy that aims to connect firm IT and business strategies. But this cannot happen without due attention to these employees’ mindsets.
On a more technical level, there is already evidence of a discrepancy between how software developers view successful collaboration and how others view it, including those responsible for signing off on developers’ performance evaluations (Shani, 2024). In the short term, tensions brought about by the AI transformation are likely to exacerbate this gulf. As such, stakeholders in SE projects – including, critically, upper managers in the firms sponsoring and hosting the projects, who are responsible for setting company strategy and direction – would do well to consider whether communication about an SE task between a human and an AI actually counts as collaboration. If not, they ought to ask why not; but if they hold that it does, they should ask which human-AI encounters qualify as collaboration.
Does any AI-based auto code completion count as meaningfully collaborative? It seems unlikely that the developers who wish for more meaningful attention to their collaborative skills would want their collaboration metrics to come down to how many times they hit “tab” to have Copilot auto-complete some code. Should only humans who work with agentic AIs be able to count that work as effective collaboration? Or should it come down to whether there is a healthy balance between human-AI and human-human interactions – and if so, what does that look like? Is there a case for allowing “islands of joy” on the Agile SE team when strategically (and cognitively) prudent, while ensuring ongoing check-ins with those on the islands until suitable bridges can be (re)built? In short, discussions of tensions and points of demarcation as to what “collaboration” is and is not must be made explicit and revisited frequently, in a suitably Agile way.
In sum, to answer such questions in a productive way, members of Agile SE teams, as well as the organizations they are part of, must learn to lean on the “growth mindset” identified by Farrow (2021) in order to adapt to the disruption being brought about by AI coding tools (Ulfsnes et al., 2024; Wivestad et al., 2024). Done well, such an approach could lead teams back to core Agile principles, like prioritizing customer satisfaction and meaningful direct engagement with end user-customers. Indeed, the AI transformation writ large could be the ultimate test of Agile philosophy and of the original Agile Manifesto.
Conclusion and Next Steps: Research and Industry Recommendations
These preliminary findings point to a number of future research and industry practice recommendations. The first research question reaches back to the fundamentals of the original Agile Manifesto:
- What does it mean to value “individuals and interactions over processes and tools” (Beck et al., 2001, para. 2) when there is a tool that is itself an encapsulation of a very specific, artificial type of technical interaction? (Suggested future research question #1)
This question hints at a fundamental skepticism as to whether Agile philosophy as we currently understand it is the right heuristic to inspire and guide the adoption of AI-based coding tools on SE teams. Do the assumptions of the Agile Manifesto need conceptual revamping to be fit for purpose in the AI age? Teasing out answers to this question will involve grappling with whether the meaning of prioritizing “individuals and interactions,” as articulated in the Agile Manifesto, can and should expand to encompass AI agents, copilots, and the like – or whether such entities ought to be relegated to the category of “tools,” which are necessarily less valued according to core Agile values. Indeed, differing assumptions about the correct answer to this question could well be at the core of the present disconnect regarding what “good” team collaboration means, or ought to mean, on Agile SE teams.
Next is a recommendation for research methodology:
- Farrow’s (2021) model of participatory workshops should be adapted and extended to focus specifically on Agile SE teams. (Suggested future research methodology)
Such an extension would allow for a more deliberate sampling of both users and non-users of AI-based coding tools in a range of organizational contexts; scholarly analysis of the results could then be compared with and validated against industry surveys of both the qualitative and quantitative varieties. Farrow’s workshops were broadly structured to elicit employee beliefs about fixed vs. growth mindsets, as well as how employees understood future scenarios in which jobs could be threatened by AI. This latter question would have to be tailored to developers, who are certainly not immune to anxieties about losing their jobs to AI but, given the nature of their work, are likely to react to this prospect differently than a more general workforce sample.
The next suggested research question involves taking preliminary findings from this project and drilling down into relevant differences among the actual coding tools used. As Pereira (2025) notes, not all AI-based coding tools are created equal, and rapid progression has been the name of the game: “Once Cursor IDE arrived on the scene… the transformation [as compared to previous AI-based coding tools] was striking. The new tool cannibalized both of its predecessors, including both autocomplete and chat inside the IDE” (Chapter 8). Since AI-based coding tools are likely to continue to develop swiftly in the coming years, companies will need to adapt accordingly by considering how different tools affect Agile team collaboration patterns. The question is therefore:
- Which AI-based coding tools lead to more meaningful Agile team collaboration, and are there any other mediating factors in this influence (e.g. growth mindset among team members, team size, industry, manner in which a tool is chosen or deployed across the organization, or others)? (Suggested future research question #2)
Note that this question assumes that “meaningful collaboration” has first been defined in a way accepted by software developers as well as other stakeholders, including upper management who define (and enact) the organization’s strategic direction.
This research question leads to the first and only industry recommendation:
- Companies which are deciding which AI tools, and in particular AI-based coding tools, to adopt should consider effects on team collaboration, unity, and cohesion in their assessments. (Industry practice recommendation)
A company may well find it acceptable to adopt a cutting-edge AI-based coding tool even if it has deleterious effects on team collaboration and unity, if such adoption makes it possible to trounce the competition in some significant way; but firms ought to be aware of this trade-off and monitor its effects as they unfold.
References
Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., Grenning, J., Highsmith, J., Hunt, A., Jeffries, R., Kern, J., Marick, B., Martin, R. C., Mellor, S., Schwaber, K., Sutherland, J., & Thomas, D. (2001). Manifesto for Agile Software Development. https://agilemanifesto.org/
Dutra, E., Diirr, B., & Santos, G. (2021, September). Human factors and their influence on software development teams – a tertiary study. Proceedings of the XXXV Brazilian Symposium on Software Engineering (SBES ’21) (pp. 442–451). Association for Computing Machinery. https://doi.org/10.1145/3474624.3474625
Farrow, E. (2021). Mindset matters: How mindset affects the ability of staff to anticipate and adapt to Artificial Intelligence (AI) future scenarios in organisational settings. AI & Society, 36(3), 895–909.
Hutzschenreuter, T., & Lämmermann, T. (2024). What Is Your AI Strategy? Systematically Integrating Self-Learning Technologies into Your Business Strategy. Academy of Management Perspectives, 00(00), 1–24. https://doi.org/10.5465/amp.2023.0243
Ozkan, N., Gök, M. Ş., & Köse, B. Ö. (2020, September). Towards a better understanding of agile mindset by using principles of agile methods. In 2020 15th Conference on Computer Science and Information Systems (FedCSIS) (pp. 721–730). IEEE.
Pereira, S. (2025). Generative AI for Software Development: Early Release Edition. O’Reilly Media, Inc.
Shani, I. (2024, February 7). Survey reveals AI’s impact on the developer experience. GitHub Blog. https://github.blog/news-insights/research/survey-reveals-ais-impact-on-the-developer-experience/
StackOverflow. (2023). 2023 Developer Survey. https://survey.stackoverflow.co/2023/#section-developer-tools-ai-in-the-development-workflow
Ulfsnes, R., Moe, N. B., Stray, V., & Skarpen, M. (2024). Transforming software development with generative AI: Empirical insights on collaboration and workflow. In Generative AI for effective software development (pp. 219–234). Springer Nature Switzerland.
Wivestad, V. T., Barbala, A., & Stray, V. (2024, June). Copilot’s island of joy: Balancing individual satisfaction with team interaction in agile development. In International Conference on Agile Software Development (pp. 123–129). Springer Nature Switzerland.
¹ “The idea of rubber-ducking is to explain the problem one seeks to solve to an inanimate object (e.g. a rubber duck), in an attempt to achieve a deeper understanding of the problem and a potential solution through the process of explaining it to someone (or something) using natural language” (Ulfsnes et al., 2024, p. 6). In this context, programmers noted that they had previously been more likely to use their pair-programming partner for a kind of rubber ducking. (The partner is of course animate and not a rubber duck, but was being used as a sounding board for their partner’s developing ideas; the person using their partner as a rubber-duck sounding board would then be expected to reciprocate by taking on the rubber duck role if and when requested by their partner.)