Esteban Elia, Head of AI and Data Science, MODO

Esteban Elia is the Head of AI and Data Science at MODO, a technology enabler for bank payments serving 16 million users in Argentina. He leads the company's AI-first transformation, shaping strategy, operating models, and culture. He also teaches executive education at MIT, including the CTO Professional Certification program and courses in Generative AI and AI Leadership.
Most companies say they're "adopting AI." It sounds great in a board deck. It photographs well on a strategy slide. But peel back the label and you'll find the same thing almost every time: a pilot here, a task force there, and the rest of the organization working exactly the way it did in 2017. Decoration dressed up as a transformation.
At MODO, we took a different path. We are a technology enabler for bank payments, serving 16 million users in Argentina. We asked an uncomfortable question early enough for the answer to actually matter.
What work are we doing today that probably shouldn't be done by a human tomorrow? That question changed everything.
There's a comforting illusion in treating AI as a project. Projects have timelines, milestones, and a nice slide at the end. AI doesn't respect any of that. It doesn't stay inside IT. It doesn't wait for your two-year roadmap. It seeps into daily decisions, small tasks, and how people prioritize their mornings. When a company tries to contain AI inside a lab or a special team, what it's really doing is protecting the status quo. Keep everything the same, add a shiny layer on top. That almost never works.
We understood early that AI had to become an organizational capability: a shift in how work gets done, which tasks are tolerated, and which decisions get questioned.
Here's the thing about AI that makes it different from previous waves of technology. It's reshaping what it means to add value as a person inside a company. We used to ship things that were "good enough" and call it an MVP. Now "good enough" is reachable by anyone with access to an agent and sufficient prompting skills. The new bar for human contribution is creativity, judgment, and the ability to ask the right questions. Everything below that bar is getting commoditized fast.
AI adoption doesn't fail because people don't want to use it. It fails because they don't know how to, when to, or if they should. You'll find fear of change, fear of being replaced, and a lot of people who simply don't know where to start. These are real barriers, and no amount of tooling fixes them. For the cultural shift to happen, you need a clear top-down message. At MODO, leadership framed AI as a strategic enabler, a different way of organizing work. That framing wasn't technical. It was cultural. And it unlocked everything that followed.
Leadership sets the direction, the boundaries and the risk tolerance. But the actual use cases come from somewhere else entirely. From people deep in the operation. The ones who know the shortcuts nobody documented, the tasks everyone does but nobody questions.
We decided that the use cases should be bottom-up, so we created an AI Guild. A cross-functional group of senior people from across the organization. Senior enough to have authority, close enough to the work to know where it was broken. They had explicit permission to challenge what had always been done a certain way, and the credibility to get people to actually listen.
The Guild needed to build real momentum before anything else made sense. We started by curating a list of strong candidates for AI use cases, debated when and where it made sense to move, and executed. The early wins were convincing enough that skeptics started asking to participate instead of watching from the sidelines.
Only after AI had gained traction with real use cases did we move to build a Center of Excellence. The CoE was the scaling mechanism, never the starting point. Its purpose was to industrialize AI by rethinking how we do things, starting with how we build products and how we do engineering. It centralized criteria, security, privacy, architecture, and compliance so that what was already working could keep working at scale.
Think about what happened with the internet. When it first showed up, everyone said they were "online." Online strategy, online presence, online this and that. If you had shut it down back then, business would have continued. Nowadays, nobody says they "use the internet" because it's embedded into everything. Today, if the internet goes down, the whole world freezes.
AI is still in that early phase. If AI goes down today, everything still works. But it won't stay that way for long.
During this transformation, our tech stack changed several times. We tried tools, replaced them, tried others. That's all part of the journey. But none of that turned out to be the interesting part. The stack is just how you do it. What actually matters is the shift in mentality, understanding where AI is heading and reorganizing your work around that trajectory. We always keep humans in the loop, but increasingly as high-level orchestrators rather than low-level executors.
There is no finish line here. Becoming AI-native is not a destination you arrive at and celebrate. It's a permanent change in how you think about work. We are building for the moment when AI is as invisible and as essential as connectivity is right now. And the only thing we know for certain is that standing still is no longer an option.
