What’s your take on Gen AI?

I see Gen AI as just another tool in the engineer’s toolkit - nothing more, nothing less. It’s great at some things and can really deliver value when used right, but it’s definitely not some magic solution to everything.

The interesting part isn’t really the tech itself - it’s what we can actually build with it to help real users. I’ve seen it both ways - some projects where Gen AI really sped things up, and others where simpler solutions just worked better. I try to stay practical about it and focus on results instead of getting caught up in all the hype.

For me, it’s just the latest useful tech we can use to solve business problems. Tomorrow it’ll probably be something else, but the core question stays the same: How do we create actual value for users?

Are there any worries/concerns? If yes, what?

My concerns about AI products are really just the same ones I have for anything I build: Will people actually use it? How many? How often? What’s it really doing for them?

I focus mostly on the practical stuff - like how much time users are actually saving with these solutions. These aren’t really AI-specific concerns - I’d be asking the same questions if I were working with any other technology.

As for AI-specific risks? They’re not really what concerns me. I’m an engineer - I see AI as another tool we can use. Today it’s AI, tomorrow it might be VR, yesterday it was blockchain. What matters is how it makes things better for real people and businesses.

Given the rapidly evolving space of AI, how do you stay updated? You mentioned in the case study interview that you rely on a close group of friends for information as well as online communities such as Discord and Slack. Are there any other sources you subscribe to? Any industry newsletters, research papers? Case studies? Any paid content you subscribe to?

I mostly rely on a mix of my network and YouTube’s algorithm. Besides the Discord and Slack communities where I chat with friends about the latest AI stuff, I let YouTube do the heavy lifting of finding interesting content for me. I don’t really subscribe to specific channels - I just let the algorithm suggest relevant AI news and reviews.

This works great because:

  • No need to actively hunt for content
  • Get nice overviews of what’s new
  • Can listen while working on other stuff
  • See different takes on the same topics

I’m not big on social media - I actually post more than I read. For written content like newsletters or papers, I’m pretty selective and often use AI tools to get to the important bits quickly.

If you do not rely on newsletters or research papers, can you tell us why?

I find YouTube’s format more practical - even if it’s just someone talking with a static image, it’s easier to absorb while multitasking. I’ll dig into written stuff when I need to really understand something specific, but it’s not my go-to for staying updated.

What’s your preferred content medium? Text? Audio (podcasts)? Or videos?

I use all formats, but here’s the thing - I mostly watch YouTube videos… without watching them. I hide the video player and just listen like it’s a podcast. YouTube’s recommendation system is just really good at finding stuff I’m interested in, way better than hunting for podcasts manually.

I do read text content, but it’s different - more for getting specific information. Sometimes I’ll even run it through an LLM for a quick summary. Audio (from videos or podcasts) is where I do most of my actual learning.

Can you think of an article/a social media post/ any piece of content that caught your attention in the past 3-4 weeks?

I watch a lot of AI-related YouTube videos, especially when doing routine tasks. Can’t point to specific ones though - they’re more like background content while working.

Worth mentioning that I’m probably not your typical content consumer - I’ve actually set up systems to limit my social media time. I’m more active posting stuff than consuming it.

What does good tech content look like for you? What should it deliver?

I mainly look for two different types of content. First, I want to stay on top of what’s new in AI. I don’t follow specific channels for this - I just let YouTube’s recommendations do their thing. Sure, I might get the news a bit later than others, but the format works better for me. When something really important happens, I usually hear about it first through my network of friends anyway.

Then there’s the practical side - I really enjoy seeing how people are actually using the technology. And it doesn’t always have to be serious business stuff. Sometimes the fun experiments are the most interesting, like this video I watched about using an LLM for voice commands in Hogwarts Legacy. These kinds of examples often spark ideas that can be applied to more practical problems.

In the end, whether it’s industry news or use cases, what matters is that the content feels genuine and engaging. I don’t care if it’s a serious business case study or someone’s weekend project - if it shows real experiences with the technology, I’m interested.

What are the biggest challenges you face currently – could be technological or organizational? Are there any repeated blockers you notice?

Right now, I’m really focused on finding the sweet spot for investment in AI projects. Having had both successful launches and projects we had to shut down due to low user adoption, I’m wrestling with an interesting question: how much effort is enough?

It’s about finding that balance - investing enough resources to properly test if an idea works, but not so much that you burn through too much money, time, and energy if it doesn’t. Some of our projects took off, others we had to close due to insufficient user interest, and each case teaches us something about this balance. I don’t think there’s a universal answer to this investment challenge - it’s more of an existential question that varies project by project.

On the technical side, there’s a recurring challenge around testing AI solutions effectively. We have the usual layers - unit tests, integration tests, and stability integration tests - but organizing this testing environment for AI is tricky. We need to verify both the stability of AI models (catching any model behavior changes) and test the entire pipeline with enough coverage to be confident it’ll work beyond just our test cases. I believe there are right approaches to this - the industry just hasn’t settled on them yet. We’re all actively working on finding these best practices.
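To make the "catching any model behavior changes" idea concrete: one common pattern is a golden-set regression test, where a fixed set of inputs and expected outputs pins down current model behavior, and any drift shows up as a failing check. This is purely an illustrative sketch, not the interviewee's actual setup - `classify_ticket` and the golden cases are made up stand-ins for whatever model call the real pipeline makes:

```python
# Hypothetical golden-set stability check for a model in a pipeline.
# GOLDEN_CASES pins down the behavior we currently rely on; if the model
# (or a prompt, or a dependency) changes, drift surfaces as a regression.

GOLDEN_CASES = {
    "Refund my last order": "billing",
    "App crashes on startup": "bug_report",
    "How do I export my data?": "how_to",
}


def classify_ticket(text: str) -> str:
    """Stand-in for the real model call; assumed deterministic here."""
    keywords = {"refund": "billing", "crash": "bug_report", "export": "how_to"}
    for word, label in keywords.items():
        if word in text.lower():
            return label
    return "other"


def find_regressions() -> list[str]:
    """Return the inputs whose current output drifted from the golden set."""
    return [
        text
        for text, expected in GOLDEN_CASES.items()
        if classify_ticket(text) != expected
    ]
```

In a real setup, `find_regressions` would run in CI alongside the unit and integration layers mentioned above, and a non-empty result would block the release until someone either fixes the pipeline or deliberately re-records the golden set.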

These two challenges - finding the right investment balance and building robust testing practices - are probably my biggest focus right now.

You’re an R&D professional, what are your priorities when building an AI solution? Do they align with organizational priorities? If not how do organizational priorities influence your decisions on adopting and implementing AI?

Working with smaller startups (under 200 people) makes things pretty straightforward. I usually get something built in a few months, push it to users, and we see pretty quickly if it works or not.

This quick back-and-forth actually helps avoid most of those classic R&D vs. business conflicts. The cycle is simple: I get space to work on the tech side for a couple months, then we release it and see what happens. When something’s not working, we pivot fast. When it is working, we see the results almost immediately. The whole process moves quickly - from development to user testing to measuring impact.

I’m really more of an engineer at heart than some pure scientist - I just want to solve problems, whatever tech it takes. This mindset keeps things focused on what actually matters.

How important is building an AI team for developing a robust solution? Are there any pain-points in this regard?

I’ve been pretty lucky with the team stuff - got a solid network in the AI world. I work with a mix of full-timers and some really sharp contractors who jump in when we need them.

It’s nice because I can pull in exactly who we need, when we need them. These folks know their stuff and can hit the ground running.

Honestly? My bigger headache isn’t finding the right people - it’s making sure we’re building stuff people actually want to use. That’s where most of my brain power goes.

Though I should mention - I know I’ve got it easier than most. Some of my friends who aren’t as deep in the tech world really struggle with building AI teams. I try to help out when I can, connecting them with people I know.

How do you know an AI solution is a success? Are there specific metrics that indicate this accurately?

Look, at the end of the day, it’s all about the business impact. Doesn’t matter if you’re building for customers or your own team - the users tell you everything you need to know.

We recently implemented a system that automates certain processes for developers and analysts. It’s straightforward to measure its success:

  • How many people actually use the solution
  • How much time it saves our team

Simple stuff, really. Whether we’re talking AI or anything else, what matters is how it shakes things up in the real world. Just gotta measure those changes in ways that make sense for the business.