April 29, 2026

Your Stressful Colleague Can Age You Faster Than You Think with Chuck DeVries

What does a peer-reviewed study on aging have to do with your workplace culture — and what does any of it have to do with a dog coding a video game? In this episode of Uncover the Human, hosts Cristina Amigoni and Alex Cullimore are joined by returning guest Chuck DeVries, a self-described "explorer in the lens of whimsy," for a wide-ranging conversation that connects cutting-edge science to everyday leadership. They dive into new research showing that toxic relationships don't just feel bad — they literally accelerate aging at the DNA level, with "hasslers" in your personal and professional life acting as biological risk multipliers. The conversation explores how this plays out on teams, in organizations, and even in our own nervous systems, and what leaders can actually do about it.

From there, the trio turns to AI — its promise, its risks, and the very human questions it forces us to confront. Chuck offers a grounded, nuanced take on how companies should think about integrating AI without hollowing out the human value that makes businesses worth building in the first place. He draws unexpected parallels between AI disruption and the discovery of fire, challenges the idea of universal basic income as a band-aid solution, and makes a compelling case for keeping the customer — not the algorithm — at the center of every decision. And just when you think it can't get any more interesting, the conversation ends with a dog who coded a playable video game using Claude. Seriously.

Links:

Chuck Chats: https://www.youtube.com/@ChuckChatChannel

Studies mentioned:

Dog builds video game: https://www.calebleak.com/posts/dog-game/

Negative social ties as emerging risk factors for accelerated aging, inflammation, and multimorbidity: https://www.pnas.org/doi/10.1073/pnas.2515331123

Credits: Raechel Sherwood for Original Score Composition.

Links:
YouTube Channel: Uncover The Human

Linkedin: https://www.linkedin.com/company/wearesiamo

Instagram: https://www.instagram.com/wearesiamo/

Facebook: https://www.facebook.com/WeAreSiamo

Website: https://www.wearesiamo.com/

00:01 - Teaser And Return Guest Setup

01:20 - Meeting Chuck And Leadership Energy

04:16 - The Study On Negative Ties

08:13 - Hasslers And Team Performance Collapse

12:05 - Boundaries And Reducing Chronic Stress

19:19 - AI Amplifies Negativity And Risk

24:19 - Jobs, The Social Contract, And Unrest

29:37 - Scaling With AI Without Losing Humans

33:57 - Fire As A Tech Inflection Analogy

34:53 - A Practical Plan For AI Adoption

41:06 - How A Dog Coded A Game

44:04 - Where To Find Chuck Online

46:35 - Authenticity And Closing Thanks

47:49 - How To Reach The Hosts

[INTRODUCTION]

"Chuck DeVries: You can give the operating parameters to the AI, but it doesn't mean that you don't need somebody who understands coding specifically around how are we going to scale this? What's the interactions between the various systems?" 

Alex Cullimore: Hello, Cristina. 

Cristina Amigoni: Hello. Two days, two podcasts, two guests. 

Alex Cullimore: Two weeks of two-day two podcasts. And both weeks we've had returning guests. 

Cristina Amigoni: Wow. And two guests. 

Alex Cullimore: Yeah. So many guests. So many returns. 

Cristina Amigoni: And it's Monday and Tuesday. I don't know what we're doing next week, but there's something missing on our calendar. 

Alex Cullimore: We just had our return guest, Chuck DeVries, who came back to talk once more about humans and about AI in that order. It was another very fun conversation. And Chuck always has a good way of distilling what's kind of complicated and sometimes stressful about things like AI, and what can be used with that information, both in terms of leadership and for companies. So, a fascinating conversation. And as always, just a fun one. 

Cristina Amigoni: Yes. Yes. And I always learn things that I never even thought existed. And they're always very useful. So it's not just whimsical random stuff. 

Alex Cullimore: Yeah. We'll leave you on a teaser. Dog video games. 

Cristina Amigoni: Yes. 

Alex Cullimore: You'll get to it. 

Cristina Amigoni: Exactly. After the humans, after the AIs, the dogs saved the day. 

Alex Cullimore: Please enjoy. 

Cristina Amigoni: Enjoy. 

[INTERVIEW]

Alex Cullimore: Welcome back to another episode of Uncover the Human. Cristina and I are joined today by a return guest, Chuck DeVries. Chuck is our guru who sits in the middle of the Venn diagram between tech and humans, and pretty much fills in that whole gap.

Cristina Amigoni: And both sides as well. 

Alex Cullimore: Yeah. 

Cristina Amigoni: Yes. Nice to have you, Chuck. It's an unofficial Chuck Chat. 

Chuck DeVries: It is a pleasure to be here, for sure. And I'm not sure whether I'm honored by that intro or mildly concerned. It probably is correct. I've been described as sort of a bot by myself, or as Chuck GPT at times. And I asked my family this morning how I should introduce myself. My daughter immediately piped up with "an explorer in the lens of whimsy." I'm like, "Oh, I like that. That's really good. I want to go with that as my –" I'm a lifelong learner and builder of things, an occasionally useful tech person, and somebody who works around tech instead of fighting it, which I thought was also good.

Cristina Amigoni: That is a great introduction. 

Alex Cullimore: I should have asked you if your daughter had any introductions. 

Chuck DeVries: Always let your family come up with things like that. Those are all things I wouldn't come up with on my own. 

Cristina Amigoni: I know. I'm a little bit scared to ask my kids how they would introduce me. 

Chuck DeVries: It's very eye-opening. One, because they see through all the layers that you build up around yourself. I remember – and this was one of those things that caused me to do some introspection at the time – I was informed by my daughter, who was very small at that time, what she thought my job was. She told her friends, "My dad's job is to yell at people."

Because at home, she heard the other side of things, and at that point in time I was on a lot of outage calls and those sorts of things. And, well, that's not really how I want to show up. And it's certainly not how I want somebody to describe my job. I had to think about that. How do I show up now? Now she's pretty sure that I'm just an uber nerd.

Cristina Amigoni: Yeah. Unfortunately, that's probably how a lot of people assume leaders are, or people with titles. Let's not call them leaders. People with titles. Their job is to yell at a lot of people. We've never heard you yell, so we can't say that that's true.

Chuck DeVries: When I actually do yell at somebody – I have one friend who's like, "I've only ever seen you actually get mad twice, and each time I was scared, and I just did what you asked, because obviously it's not a normal behavior for you." I would much prefer to be quiet, calm, and collected. And that's my job. I think as a leader, part of your job is to absorb some of the noise and then help to filter. But yes, at times it can come across differently. Even if you're telling somebody that they need to improve in something, it's how they hear it, too. It's not necessarily that you're actually yelling. It's how they hear it.

Cristina Amigoni: Yeah. Somehow, the shaming and the belittling doesn't quite work. It's not a huge motivator. 

Chuck DeVries: That is a terrible model for any sort of scale. Because, yeah, people will do what you do when you're standing there looking. But as soon as you leave, they're not going to anymore. You got to build a culture that allows people to be their best selves all the time, not just when you're watching. 

Alex Cullimore: Yeah. 

Chuck DeVries: Otherwise, that's your gig. 

Alex Cullimore: Yeah. If you can only be yourself in the good times, it's going to be really hard. It actually brings up something that we were talking about just before the recording. You mentioned that you found a study that talked about the – I'm going to let you say the title of it because it's worth the read. 

Cristina Amigoni: Yes. 

Chuck DeVries: All right, I will read it out here, but I'll start off with this: it's basically negative social ties as risk multipliers, right? The big, official, fancy title for the paper itself is "Negative social ties as emerging risk factors for accelerated aging, inflammation, and multimorbidity." Aside from being a mouthful, it is basically saying that when somebody is a terrible influence or they cause you a lot of stress, they literally make you older.

Cristina Amigoni: Yeah. Tying it to yelling at people. 

Chuck DeVries: Right? Don't be that guy, or that cow, or that person, or that dog, I guess. I don't know. Anything that adds stress. They go through a number of different things. And this one is a fully released, peer-reviewed study. A lot of the studies in this space are around positive psychology. And there's a couple of different things that I find particularly interesting about this particular one.

A lot of times when we look at the different positive pieces, we assume that just an absence of those positives must be a negative. But one of the things that's interesting about this paper is if you think about that as like a graph of connections, where you're connected to a bunch of humans. You have positive connections, you have negative connections, you may have neutral connections, they all have different weights that are going to pull you in different directions. 

And this actually measures that negative side, and it gives you a much more scientific – not that everybody's going to sequence their own DNA and say, "You, my aunt, uncle, or person who's near to me, are causing me this much. You take 1.5 years off of my life, and I have the science to prove it." That's probably a little over the top. It might be okay in my house, but probably not in most households. But it does give you a framing to be able to put some of those things together.

Cristina Amigoni: I would love for that to be done in companies, though, where you get the yelling boss and be like, "See? Getting yelled at by this person is doing this. So, can we provide some leadership development?"

Alex Cullimore: An AI chip that goes into your brain, and it'll just read it constantly. And you'll just see it ticking down your expected life. 

Chuck DeVries: The statement that I have often used to phrase that, and this shows up in many different ways, both positively and negatively, is that you have that concern of don't let who you are be so loud they can't hear what you have to say. If you're always coming across as harsh and as negative and cutting, people will tune you out, right? They won't hear what you actually have to say. And the same thing could be if you're too goofy, if you're too – I tend to lead with humor. So if I lead with too much humor, they may not hear what I have to say either from a leadership or growth point of view. 

Cristina Amigoni: So I have two questions, because, well, we didn't do homework, and we didn't read the study. One is, do they explain in the study what constitutes a negative relationship, a negative social tie?

Chuck DeVries: There's a variety of different ways, and it's all the stuff that you would naturally think of, right? Whether it's abusive, whether it's any of the social determinant things that you would think of. If there's a positive, there's a negative to it, you know. So, it could be regular abuse, could be verbal abuse, could be situational, could be poverty. Any of those different pieces can be a potential drain, and that's how they're looking at some of that.

The way that they measure the output and the input is across a variety of different studies. So, the way that they collected this data is not that they just went around asking, "Do you feel abused at home? Would you let us take a DNA sample?" That's not the approach. As social psychologists normally do, they have all these different studies, and they correlated a bunch of these different pieces in places where people didn't know they were being studied, or didn't know that that was what they were studying. They're very sneaky people, those social psychologists.

Cristina Amigoni: Fascinating. 

Alex Cullimore: They just ask their wives and daughters how to introduce them. 

Chuck DeVries: If you're ever at a study and you're participating in one, and they're asking you to put coins into a slot that doesn't work, chances are they're not actually testing you on your dexterity. 

Cristina Amigoni: Or your luck to see if you actually win anything. 

Chuck DeVries: Yes.

Cristina Amigoni: I forgot my second question. 

Chuck DeVries: I have that effect on people. 

Alex Cullimore: I remember seeing a similar study on that based on teams, and it found that negatively social people on teams have an outsized impact. Basically, a team will do fine as long as there are neutral and positive people. But the second you introduce even one negatively social person who has antisocial behaviors, as they say – not in the hiding-away-from-people sense, but in the not-helping-the-collaboration sense – it deeply impacts the performance of a team very quickly. It only takes that one. It's not like you have to have a majority. You just have to have the influence.

Chuck DeVries: Yeah. If you've been a leader and you've managed teams, and you've run into one of those people, this is where the addition by subtraction thing comes up, right? A lot of times that person is the brilliant jerk in that team, and you're like, "Well, we can't remove them. They're the only person who knows da-da-da-da-da." Pick whatever your thing is. 

But you find that when you do remove them, everyone else can now breathe. They step up. They take more space, and they bring it through. Actually, in the study, they refer to them as hasslers. The people who are causing that stress or distress are referred to as hasslers, so they look at the measure of what the hassler does to whomever they're connected to. And it is a network effect. To Alex's point on the teams, it's not just one person. It is all of the interacting nodes around that person that you can see it affecting.

Cristina Amigoni: How contagious is it besides the impact on all the nodes? How contagious is negativity and the multimorbidity? 

Alex Cullimore: 3.7. 

Chuck DeVries: I was going to go with 7.2. Precision versus accuracy.

Alex Cullimore: Different units. 

Chuck DeVries: Yeah. One of them is actually an astrological measure, and the other one is a physical measure. The way that they come together – the way to think about it is like a compounding thing, right? As you look at different risk measures or positive measures, you're going to have different things that interact with one another, and you're going to get a multiplier effect.

Similar to if you have – I know this is a weird example, but roll with me. This is how my head works. If you're putting very salty cheese in a dish and you also add salt, you can suddenly find yourself with, "Okay, a very reasonable, very small amount of salt, and some fantastic cheddar." But now you're tasting it afterwards, and you're like, "Oh my god, I'm licking a salt lick here. What happened?" The additive effect actually acted as a multiplier. And sometimes that's a good thing, and sometimes that's a bad thing.

I would argue, if you're making burgers, and you've got a little bit of salt on the meat and a nice cheddar on the top, that's an additive thing. But if you were trying to make a cheesy sort of dessert, you could have turned it into something awful that no one wants to consume. And so, similar sorts of math happen on the waveforms of your relationships, and your emotional health, and your physical health.

Cristina Amigoni: That makes a lot of sense. That's what I would have guessed without doing the study. 

Alex Cullimore: We can all feel that one internally. 

Chuck DeVries: Yeah. In the study, one of the things that they talk about is that it has an outsized effect when it is somebody who is core to your relationships, but they don't have to be in that core set. It can be adjacent, or kind of an outside, peripheral tie sort of thing. But if it's persistent enough and it's a one-way sort of thing, it doesn't need to be central. It doesn't need to be intimate. It will still create a chronic stress condition, and you can see that from a team's point of view.

To bring that back to the leadership point of view, we've probably all been in situations where there's somebody who is a couple levels above or couple levels off. Maybe they're a customer, maybe they're a consumer of something that you're doing, you're like, "That person is a jerk." And everybody just gets anxiety when they think they might have to deal with them or the ramifications of waves around that person. And so knowing that it actually truly is not just a psychological but a physiological effect should hopefully push people to more action. 

Cristina Amigoni: Yeah. What is the action that people can take? Because, as you were explaining the removal piece – even if you're not dealing with the negative person directly, just knowing about the negative person gives you anxiety, and ages you, and all the DNA consequences, and then the nodes of whoever you're dealing with. But I wonder, after removing yourself from that situation, even thinking about that person, that relationship, years later – how much does that stick around?

Chuck DeVries: It certainly does, right? I mean, they don't really go into that from like a time duration or dilation point of view in the paper. But for sure, that's true, right? I mean, I'm sure all three of us could come up with it. Let's think about the person who has caused us the most grief in our life, and then suddenly you feel a queasiness in your stomach. You know, right? Okay, that's certainly something. That trauma is very deep. And I'd prefer to push it back down. 

But to where you started to go, I think part of that question, too, though, is what could you do? Because nobody wants to get older faster. I mean, I assume when you're really young, you do. But once you reach a certain point, you want to go the other direction. You prefer younger. But it is like many other things. Being conscious of it and knowing that this is what happens when I think about this, or this is what happens when I think about these people, allows you to be able to be better prepared for it. So you can create a conscious set of boundaries. You can create a conscious set of conflict resolution conditions or reduction conditions. You can help to restructure and reshape. 

And sometimes you can't get Uncle Joe to not be a racist jerk, whatever it is. But you don't necessarily need to go visit him all the time, or talk about social media, or, god forbid, politics, because you know that's going to just be a stress for absolutely everybody. And same sort of thing in a work setting. What can you do to make it so that you've got an agenda when you're going in and talking to whomever? What can you do to make sure that you've got that thing funneled on point? Who do you know who are the natural peacemakers? There's always somebody who is a helpful person to assuage tensions, something along those lines.

Cristina Amigoni: Makes a lot of sense. Yes, we have a better understanding of the impact on the individual's DNA, and even teams. What happens to the DNA of an organization, and the results of that?

Chuck DeVries: Obviously, they didn't study that part. They were studying the humans. But I would imagine it's very directly relational. It's a similar sort of thing companies figure out over time. Either you're purposeful about your culture and the interconnectedness of your people, or it happens anyway and you get one. And – just to take the biological analogy even further, right? If you have a cancer in your organization, as in your body, it's the same sort of thing. It causes distress. It causes an undue level of impact upon the rest of the organization, upon the rest of the organism.

Even by itself, cancer is terrible and not something that we would want anybody to have. But from the cancer's point of view, it's just trying to grow. It's trying to do its job, which is to propagate. And under different conditions, that is exactly what you want. Similarly, somebody in a corporation might feel like they're doing exactly what you want. You've incented them to sales at any cost, or to build this piece and get this done within this period of time, no matter what the people ramifications are. And understanding there are additional effects to that means you've got to get treatment. You're going to have to do something about it. You're going to have to take care. You're going to have to provide aftercare.

So even after, let's say that person who is the hassler is removed from that situation, it's still a post-traumatic event that you have to provide some psychological help to get people through. 

Alex Cullimore: That's a good way of putting it. There's a lot of proactivity that is needed for that reason, when you know that there's a kind of force multiplier there of a negative impact. Like you said, it's addition by subtraction. But since it's actually a multiplier, it's like division by subtraction, or multiplication by subtraction. It's interesting. But it's one thing to keep in mind both in your personal life as well as definitely on teams. Once you have that negative influence, you can either let it spread – and then it starts to get a little more contagious, or it can go to other teams – or you can try to cut it out, or you try to resolve it some way or another.

Chuck DeVries: Yeah, my earlier comment on it: it's like graph math, where you've got positives and negatives, and you're trying to balance those things out to be able to do a grid-based mathematical problem. I want the total to be good, right? I mean, there are other studies, obviously not this one. But one of the most often quoted studies is around the five people you spend the most time with – you are most likely to be like them. Whether it's compensation, or output, or even affect. That has now been shown to affect your DNA, your health, your gut health. People are fascinating machines.

Cristina Amigoni: Yeah, they have a lot of influence. 

Alex Cullimore: Yeah. 

Chuck DeVries: More than we might wish them to be, right? 

Cristina Amigoni: Yeah. It's crazy. 

Alex Cullimore: Yeah. That's the flip side of being a social animal. That's what it means. We are connected to the other ones around us. We are endlessly influenced. 

Chuck DeVries: The advent of fire allowed us to get bigger brains, to get closer together, and then to build all those different social norms that reinforced over time. And for good or ill, we are social creatures.

One of the bits that I think is an interesting conclusion out of the study is that a lot of times we think solitude is the worst. And not that solitude is great, right? We are social creatures, and we need some level of interaction. But it turns out that solitude is a neutral compared to interfacing with a hassler.

And so if you, again, take that network math point of view, you've got positives, you've got some neutrals, and you've got negatives. You are much better off being able to cut off that negative piece. And so having some degree of solitude is actually perhaps better than being in an actively aggressive environment.

Cristina Amigoni: I would confirm that even without the study. Definitely. It does feel better. Makes you sleep better. Digestion is better. 

Chuck DeVries: It does. It does. 

Cristina Amigoni: Showing up for other people is better. 

Chuck DeVries: It's one of those things, though – not to minimize it – it is difficult to leave a situation like that. There are numerous studies around that, too, whether it's physical abuse or other things. It's difficult to leave because of your cost of change. You get overwhelmed with the "but then what?" And what are the other things? What's next? But again, knowing the math maybe helps a little bit, to be able to say, "Here's what the impact actually is and what the ramifications of that are." Eventually, that's what tips the math.

Cristina Amigoni: Yeah. Solitude, socially and even internally, feels so much worse that choosing solitude sometimes – a lot of times – is that overwhelming. Like, "I can't leave because I'm going to be alone. And what happens if I'm alone? Who's going to protect me from the saber-tooth tigers?" Which sometimes we didn't realize were in the house or at work. We're not outside.

Chuck DeVries: Right? That is the one that you're worried about. Yeah. No, that's a very good point. 

Cristina Amigoni: Yeah. Then we're constantly in survival mode where we think we're safe, but we're not. 

Chuck DeVries: And that's part of what causes the aging from a DNA point of view, right? You're in hyper-alert mode all the time. Your chemicals are not balanced the way that they should be or the way that you want them to be. And so, therefore, you pay an additional cost.

Cristina Amigoni: Indeed. Yeah. So, we're going to disappoint a lot of people if we don't have you talk about AI, because that is the only reason why they've waited 23 minutes listening to this podcast with your name attached to it. So, let's bring that into the equation. How does that influence things?

Chuck DeVries: We'll start with this, and then I'm sure we'll wander off – or I'll wander off. Let's be honest. If you think about what this study shows, and you think about the social dynamics and how those pieces work, we already see things with social networks and the enshittification of social networks. And the algorithm does what the algorithm is intended to do, which is to get clicks. The algorithms are really efficient at doing that, and they seem very smart. But it's not actually smart. We may feel like, "Okay, well, it's really smart. It's able to get me to do something." But no, it's not that smart. It's actually just really good at saying, "Okay, this is what triggers it."

And as we think about AI and how we want AI to take that next set of steps, this is why transparency or explainability really has got to be a core thing that we are building in to be able to take it to as far as we want to be able to go. Because otherwise, you see some of the negative chaining effects happen out of that, where it could get significantly worse. 

Recent events with the AI company Anthropic and the government kind of going head-to-head – it's a fantastic exercise in bullying, I think, to a certain extent. But their point is, hey, the technology is not mature enough. Right? If you actually listen to what Dario has said in more than one location, it's not that they're trying to prevent the US from having fully autonomous weapons. They're saying the tech is not ready to be a fully autonomous, no-human-in-the-loop weapon. And that really is a frightening thing by itself. Forget the fact that you could survey all Americans. And Congress should get involved here. If the defense on that is, "Well, it's not illegal," okay, let's get that fixed then. Right?

Cristina Amigoni: Yes. Yes. Should be. 

Alex Cullimore: If only there were some lawmakers in the room. 

Chuck DeVries: Right. Right. For anybody who's in that business. Personally, I firmly believe that there is a path to goodness from AI on all those different pieces, and where it can actually help you. There have already been different cases where AI is actually able to be utilized with some amount of psychological care. Now, there's some danger inside of there, too, like AI-induced psychosis and things like that – "I hear the voices, and now the voices are actually telling me these things."

So, we've got to watch for all of that stuff. It is neither all-powerful nor all-knowing, really. It is an intelligence that we have created. And now we've got to figure out how we make it as practically useful as possible.

Cristina Amigoni: Yes. 

Chuck DeVries: Pause for a minute, and then I'll talk about the dog that codes. 

Cristina Amigoni: Yes, we've got to talk about the dog that codes. Yes.

Alex Cullimore: I think there's some similarity with the study there as well, at least as far as reactions to AI. Because the second there's a negative experience with AI, or one that makes the headlines – even if you just hear about it and didn't have it yourself – that tends to have a larger lasting effect, because our brains are more ready to receive and watch for negative things than to find anything positive. So, it's good to hear there are positive ideas on AI. But it's also understandable that people have such a fear of it, especially with how it has been used, and attempted to be used, to replace people or to deliver results that don't hold up.

Chuck DeVries: And that's not a new problem for technology. There are many factors coming together that are causing this particular one to have even more increased sensitivity than usual. And I have a couple different reasons why I think that is true. But when computers first came in and Excel was first created, they're like, "Well, we're not going to need finance people, because we'll be able to just do all this stuff in a spreadsheet." That has not been my experience. I don't know about you guys. All right. They became much more capable. And the finance people love Excel, right? They're wackadoo about it. I'm personally not a big spreadsheet fan in that particular use case, but I completely get it.

Coding by itself is actually a profession, which is only like 70, 80 years old total. It has not really been around for a huge amount of time. But the biggest things that are causing the distress is it has the potential to violate the social contract of, "Hey, I put all this effort in. I got this education. I was promised a good life, a good job, a good whatever. And now a computer could do that and can do it for cheaper to be able to do that. Now, what about me?" 

And that's different than some of the previous things where you've seen, okay, electricity came in, and we were able to automate physical labor. And so, okay, those physical laborers upskilled, and then they became the people who managed the machines that did that work or managed to make sure that the output was correct. And actually, there might have been fewer of those jobs. They were higher paying. And so, therefore, we did more. And we continued to scale those things out. 

And I think with AI, we don't yet know what those new things are that we're going to actually need, which is part of what makes it really scary. You're hearing, "It's 100% going to replace me." Now, I will tell you, that is not where we are. So, Dario – I feel like I'm a Dario spokesperson today. That is not normally the case. But he and a couple of others have talked about being more on the cautious side of things: "Hey, this thing is about to happen."

I think the last time that we talked, we talked about my P(doom) – you know, your probability of doom. If your probability of doom is higher than 1%, you probably ought to do something about it, honestly. And if it's much higher than that, you definitely ought to do something about it.

My rule of thumb for people is: what probability that an airplane is going to crash, every time you're on one, is going to make you not go on airplanes anymore, right? If it's 1%, you're like, "Okay, one out of a hundred times, I feel like I'm okay." I don't personally, but maybe you do. That's fine. 

Cristina Amigoni: A little less would be good. 

Chuck DeVries: Right? But I want that to be a really low number. I would do something about that. But even if it's like five or whatever, then you should chase after that. Dario's estimate is 20%. Right? There are others who are thinking lower – but that's from unemployment. It could cause that much unemployment. 
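
The compounding arithmetic behind the airplane analogy can be sketched in a few lines. The numbers below are illustrative only, not from the episode: a per-event risk that sounds small becomes large once you repeat the event many times.

```python
def prob_at_least_one_crash(p_per_flight: float, n_flights: int) -> float:
    """P(at least one crash in n independent flights).

    Complement rule: the chance of surviving every flight is
    (1 - p)^n, so the chance of at least one crash is 1 minus that.
    """
    return 1 - (1 - p_per_flight) ** n_flights

if __name__ == "__main__":
    # A 1% per-flight risk feels tolerable once...
    print(prob_at_least_one_crash(0.01, 1))    # 0.01
    # ...but over 100 flights it compounds to roughly a 63% chance.
    print(round(prob_at_least_one_crash(0.01, 100), 3))
```

This is why even a single-digit "P(doom)" estimate argues for doing something: the exposure repeats.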

Now, that is not Great Depression levels of unemployment. It is Great Recession levels of unemployment, and it will cause social unrest if that actually happens. And so we as a society need to figure out what are some of the things that we want to put in place. And how are we going to take those things to that next level so that it is a human-based rollout? 

And I think there's a number of different ways to do that. And I think there's actually a number of different opportunities that it's going to unlock, both for individuals and for companies, to do way more than they could before. That also doesn't mean there's going to be zero disruption along the way. 

Cristina Amigoni: That's part of my frustration, or at least my soapbox, whichever it is. My challenge is, as much as I use AI in all sorts of places – even places where I don't know I'm using AI, because it's everywhere – what's the point of doing all of this to leave humans out of companies and out of all of it? Aren't we doing this for humans? And also, if the humans are out, who's buying your products and services? 

Chuck DeVries: Yeah. 

Cristina Amigoni: It doesn't feel like it's a smart economic decision besides being not a very good human decision. 

Chuck DeVries: Yeah, there was a positioning paper or a research paper that was written by – starts with a C, but I can't remember the exact company name right at this moment. And it basically laid out exactly what you're talking about, right? It's, "Okay, great. Companies start to see more profits. They're replacing people with AI. They release more people. They do more stuff with AI. But all the people who had those jobs no longer have those jobs and no longer have that disposable income." Therefore, you see some sort of spiral, right? 

They actually paint: here's the positive picture, and then here's the very negative picture. And I've had to have therapy sessions with several of my friends around that particular one. Is that possible? Sure. It is up to us to make sure that is not the future that plays out. The ability to do more – this is sort of the same question of whether companies should maximize profit or maximize good. And there are now multiple different types of corporations. You can actually have a public benefit corporation. And part of what they need to be doing is actually contributing to the overall public good. 

I think there's other things. One of the big topics that people talk about is, "Oh, go to a universal basic income." Okay, great. That's interesting. But I don't personally think that's sufficient. If you run into that problem where you have mass unemployment, paying some level of unemployment is effectively what that is answering. That's a band-aid, not a solution. 

And while that's great – it may be an important part of overall policy – it is not the answer. There's also talk about a universal high income of, we can just – we'll print money. It'll be fine. We've printed money in the past. It's not a scalable answer either. 

Alex Cullimore: Works out every time in the past. 

Chuck DeVries: Yeah. That would tell you, "Okay, you got more money, you have more money supply, you get increased demand. That's how math works, right?" It's not going to actually solve that problem. So, you've got to come up with other things in other venues. And I think we are hitting an inflection point where there are going to be a number of other different opportunities for humanity, and trying to figure out what still needs to be done, what has value, and what creates value for other humans is a fundamental thing for that. And I thought you said it just fine, by the way. 

We do all these tech things. We build these corporations not to serve some headless money master. We do them to serve a benefit for our customers. And this is sort of the same thing that we see. So you see companies scale. As they scale, they add more levels of bureaucracy to increase consistency. And then once you get to a certain point of bureaucracy and consistency, you get to sub-partitioning, because now we're going to have people who are focused on just the finance piece, people who are just focused on the tech piece, and people who are just focused on the design piece. And that's perfectly normal, right? That is how companies scale now. But what ends up happening is that you start to think that that is what the company does to create value. And it's not. It is the value you are creating for your customer – that is the end thing. 

And so to me, one of the big disruptors that's coming is that I think we're going to be able to make do with less bureaucracy for consistency, because you can give the operating parameters to the AI to do those things. But it doesn't mean that you don't need somebody who understands coding, specifically around, "How are we going to scale this? What are the interactions between the various systems?" You need somebody who understands design to be able to say what is going to be our brand. Can they help you? For sure. 

I mean, it's a very intelligent bot. It can do all those things. Can it do finance? Yes. But how are we going to put those pieces together? That does mean you'll end up being able to do more and do it with fewer people. But fundamentally, your customer is still your customer. So, my hope is that that becomes the anchor we get people to really think about. The transformation that happens then is: if you are focused on your customer, you get closer to your customer. You remove some of those layers of bureaucracy that have given you consistency, and you actually will be able to still be consistent and go faster and do more for those customers. I will step down off of my little soapbox at this point. 

Cristina Amigoni: No, we need more of that. Don't step down. 

Chuck DeVries: Preach.

Alex Cullimore: That's exactly, I think, my hope for AI. And what I think will eventually start to settle out is: it's a tool that will turn some things to hyper-speed and will be able to do lots of things for us. And it won't be able to do all the things. And I like your example of the machines and physical labor. It's a good example of, "Hey, this replaced it. People had to upskill." 

I am more and more concerned about the lack of caring about that transition point, because there will be a lot of disruption. And it's not that you can totally mitigate that, or that you should try and do nothing because there's going to be some disruption. There are good things that will come out of some of this. But there's so much more diligence needed to focus on what that good will be, and on making sure that disruption isn't a mass unemployment event, so that there's the ability to do those things. 

And I do agree that there's the possibility. And I like hearing about those possibilities, because I like finding the policies that should be implemented. I like that idea of focusing on the customer, because it's not focusing on the shareholder. Because AI can do a lot of the little pump-and-dump, get-the-one-quarter-done feel. I like that focus. I think that's a great mindset. And I would love to see more people endorsing that kind of thinking, and that kind of wider "what are we doing to actually create value," not just create next month's revenue. 

Chuck DeVries: Yeah, I think this is just an interesting mental exercise. I don't have an answer. But my interesting mental exercise on this is what was the conversation when they did discover fire? Because if you think about when fire was – and I know. How does Chuck's head work? How does he go from – I don't know. 

If you think about that as a level of analog impact. So up until that time: tribal, moving around, small groups, living off what you could harvest. If you hunted, you had to eat it immediately. Or if you gathered something, you didn't over-gather. I've long argued that wheat domesticated us and corn domesticated us, not the other way around, because they are really successful right now. They're everywhere. 

But when we discovered fire as a sustainable thing and being able to cook, a number of things happened. Our brains got bigger. The prefrontal cortex got bigger. The relationship centers of our brain began to expand. And you started to see more settlements because you could do more. That's part of what unlocked not just the thinking and the capability to do organized sets of hunting and whatnot, but then it unlocked farming, and that unlocked a number of other things. 

Now, every one of those had another set of secondary effects that weren't foreseen at the time. And some were positive and some were not. You look at the Industrial Revolution, and we look back and, "Ah, that was great. People had lots of opportunity." Yeah, but there was also just an incredible increase in the number of poor and in sickness. And, hey, hospitals were invented because they needed them. It's worth understanding some of that. 

And this is one of those places where some amount of forethought and design in how we manifest some of these things will really go a long way to making sure the secondary effects are what you want, or at least channeled into different shunts to let off the pressure. What do you think the conversation was like around the fire when they first got it? "Well, this means we're not going to travel as much anymore." 

Cristina Amigoni: Yes. 

Chuck DeVries: There goes our way of life. 

Cristina Amigoni: Yes, there goes my dream vacation to Fiji. 

Chuck DeVries: Exactly. 

Alex Cullimore: If I go to Fiji, we can bring the warmth here. 

Chuck DeVries: I don't think that was the conversation before the invention of fire. Yeah. 

Cristina Amigoni: Yeah, probably not. That is interesting. 

Alex Cullimore: They were all texting each other, "God, I just got fire. Crazy."

Cristina Amigoni: Keep putting my hand on it and I keep getting burnt. Where's the hospital? 

Alex Cullimore: I was wondering what this emoji meant. 

Chuck DeVries: Before that conversation, it was grandma has lived forever now, and she's starting to become a burden. You do something about it. I don't want to do something about it. 

Cristina Amigoni: Yes. Here comes fire. 

Chuck DeVries: And now we can sit down and it's okay. We have more food than we know what to do with, right? Hey, that's great. 

Cristina Amigoni: Yeah, for sure. Let's say you've invented fire, but in the sense of you run the company and you have full say on how to integrate AI into a company that is made of humans. How would you do it? 

Chuck DeVries: Yeah. I mean, so it really depends upon what it is that my company is doing, right? So, what is my output? What is my value that we're creating to the rest of society? Let's pick healthcare, right? I love healthcare. 

Cristina Amigoni: Random one. 

Chuck DeVries: Yeah, I come from healthcare. I've spent the last two decades in healthcare and travel. All right, we can use AI to increase, in many places, the amount of care that people can get. Right? AI is accelerating discoveries. Within the last few months, OpenAI worked with a research education facility, one of the universities, that has a full-blown wet lab, fully automated, actually doing testing at scale 24 hours a day. You couldn't do that before as humans. The number of discoveries is going to continue to ratchet up. 

Demis Hassabis, the CEO of Google DeepMind, was talking about how, within the next three years, it's conceivable that we could have answers for all diseases, which I think is amazing, right? They've now got the ability to fold all the different proteins and look at all those different things. All of that's going to provide different accelerations. And you still need the ability to get it to the right people. And part of the care ecosystem is an inherently human thing, right? Why would we solve disease if we didn't care about humans? It would probably be easier just to put them off to the side somewhere. But that's one of the core purposes. 

And so, you've got the ability to use AI to help with managing what's going on, right? This is one that I think is almost a silly example, but I think it actually could be very useful. You may have elderly patients who you can give an interaction modality of a small stuffed animal – a little teddy bear or whatever – that has some cameras and some other miscellaneous sensors. You put your finger in the thing's mouth and it can run a series of tests and let you know how they're doing. You can actually interact. That stuff goes to a doctor, and that doctor can look at the different pieces. Or it goes to an AI that helps make some of the different calls. And then somebody can go to that person and help intercede when different things come up. 

But again, I think for every industry, you're going to have to go through essentially your end-to-end work plan. What is it that you do? And what creates value? What are the steps? What are the things that create value for the customer? What are the things that you do just because you're trying to measure some level of consistency or whatever? 

And then based upon that work breakdown, you need to say, "Okay, this is something where there is unique value in a human doing it." Whether it's making a judgment call, whether it's creating value, whether it's communicating, connecting with another human, as we started off here. The positive effect of humans in relationship with other humans is outsized; the negative, the same thing. 

We want to make sure that those things get kept. And then there's other places where you go, "Okay, we can do automation here with AI." And when you're talking Gen AI, it's really places where you're talking about probabilistic decision-making and logic – being able to capture it, apply an intelligence to it, and get different answers. Versus automation. Just automation. Automation has existed for a very long time; we've been able to do RPA, and that did disrupt people and disrupt jobs, and people shifted from one thing to another. 

This one's bigger because it's not just one thing, right? It's not, "Those are call center jobs. Nobody wants to do that anyway." Well, I mean, the people who are working at it want to do it, obviously. So I think, again: running through what it is that the company does. How do you touch customers, and how do you create value? And then, what are the things that humans add value to and are unique for? And what are the things where, yeah, we can use automation to do that? That's the right way to approach it. And then the math is going to change. I'm sure there will be some jobs that change. And there will be folks that lose jobs or gain other things. Purposeful is the best thing that we can be. 

Cristina Amigoni: Well, as you said, and you've talked about this often, is looking at it from how do we change the way we work, as opposed to AI is just going to do the work. And that's it. Humans, you're no longer needed. 

Chuck DeVries: It is the ways of working, and accelerating the ways of working, to create the value at an accelerated pace. That's what's going to unlock our ability to travel to the moon, or to Mars, or to other solar systems. And I think there's a whole lot of things that will unlock because we will be able to remove some of that friction and take on some of those things that were previously science fiction but are now very fast becoming science fact. 

Cristina Amigoni: Very good point. We've created new life with fire. 

Chuck DeVries: Yeah. Again, I know. How does Chuck's brain go to these different things? I was looking at some of the work of von Neumann. Brilliant mathematician. One of the core folks responsible for building the nuclear bomb, along with a bunch of other brilliant scientists you might have heard of, like Einstein. But he actually, back in the 50s, wrote a paper around his concern about the rate of gain in technological competence, and how it would surpass our ability as humans to channel it and control it. And what would we do? 

Now, you've got to understand, this is the dude behind the ENIAC, the original tube computers, and the MANIAC, which was where they replaced people actually pulling and plugging things with the ability to actually build programs on those fancy things called cards. And a variety of other things, like the Monte Carlo method, and many other really advanced topics that have enabled all of the tech that we have been doing and that's exploding right now. But even then, he was concerned about our ability. It's not like we didn't see some of this stuff coming. But it also is a long-form problem. It feels imminent because we're seeing the kick, but there's still a range of time to go. 

Cristina Amigoni: Yeah, it is a long form problem for sure. So, now we do have to ask about the dog that codes. 

Chuck DeVries: Okay, so this was one of my favorite miscellaneous brain-candy things of certainly at least the past few weeks. And so, not only did it code, it built a video game. If you want to check it out, it's calebleak.com – that's the person whose dog has done the coding. And it's actually even better. 

The way that he discovered this – it's in his blog, it's like post/dog-game, and you can go play it yourself. He actually came back to find – one time he was coding with Claude, and there was a bunch of weird things inside of his entry with Claude Code. And Claude did its best to interpret what the output should be based upon that very cryptic bunch of things and actually generated some stuff. That just got him thinking, "Okay, well, what could possibly have happened?" And then he figured out that it was his dog, either dragging its face, or smacking with a paw, or whatever. 

And so, as only engineers do, his natural conclusion to this was, "I wonder if I could scale this." And so, he got a Raspberry Pi, which is one of those little tiny computers, and he built it out with a little dog interface. So, if you've seen – and they're all over, pick your social media thing – the dog going, "Bird, bird, bird, bird." That would be my dog. My dog would go, "Squirrel, squirrel, squirrel, squirrel, squirrel, alarm, squirrel." That would be the equivalent. 

He hooked it up to one of those different pieces to go through, and then actually built it so the dog could see what the video game elements were. And so, effectively, through the release of treats and the view of what they were doing, the dog coded a video game. And it is actually playable. I'm not saying it's going to disrupt Halo and Xbox, or Sony and PlayStation, or anything like that. But the fact that, one, you could do it, and two, it actually is playable, points to just how far along the AIs really are in terms of interpretation and output. 

Cristina Amigoni: That's pretty awesome. 

Chuck DeVries: Right? 

Alex Cullimore: Yeah. I have six coders in my house. I didn't know. 

Cristina Amigoni: Yes. 

Chuck DeVries: Right? Yeah. You got to hook those cats up, man. 

Cristina Amigoni: You could build the next Xbox. I don't know. What are you doing all day? 

Alex Cullimore: Yeah. I had no idea this was a possibility. I have a lot to do this afternoon. 

Chuck DeVries: Yeah. Right? Well, you go to that website. You can probably get the Raspberry Pi stuff. You can get that set up for a couple hundred bucks and away you go. You've got a whole little army of coders. And who knows what you'll code next? 

Cristina Amigoni: Exactly. You can go viral with cat videos and cat coders. 

Chuck DeVries: I suspect both would take off. 

Alex Cullimore: Yeah. They're pretty popular on the internet. 

Cristina Amigoni: Yes. 

Chuck DeVries: I don't know about you, but in my head I have sort of the Exploding Kittens color scheme and action. 

Cristina Amigoni: Yeah. Yeah. Well, as always, we've learned a ton. So, thank you for that, Chuck. 

Chuck DeVries: Maybe some of it was useful. 

Cristina Amigoni: I would say all of it is useful. 

Chuck DeVries: That is also always the follow-on caveat. It's always entertaining. Sometimes it's useful. 

Cristina Amigoni: I know. And we got our own private – because it's going to go public – Chuck Chat. 

Chuck DeVries: Yeah. There you go. Yeah, private for now. Breaking news here, I guess. I am working on being able to launch a Chuck Chat channel on YouTube. I'm trying to figure out how I work it into the right schedule and hold up a commitment of actually doing it on a regular basis. It was easier to do, I think, when I was held accountable to a meeting that's on the calendar. So, I'm going to have to break some of that stuff up and put that on there. 

That's the breaking news here with you guys. But it turns out that social media – even LinkedIn – is a hole that you can just fall into for hours, and then look up and go, "Where did my day go?" Or YouTube, or insert the name of whatever. It's not a thing that I fall into very often, or used to fall into very often, but I've found myself falling into it a couple of times here lately. 

Cristina Amigoni: It is a hole for sure. Yes. So, that answered the question of where people can find you. Well, LinkedIn. Highly recommend. 

Chuck DeVries: Yeah, I'm on LinkedIn. 

Cristina Amigoni: Following Chuck. Always wonderful posts. 

Alex Cullimore: And Chuck Chat, for those curious – just so that we can give it context – is a reference to some very popular calls that Chuck had at the company we met him at, which were some of the best-attended regular calls I've ever seen. Chuck would bring some new topic: something to do with the business, or often a new AI topic, or something that was being tried out that week. And it was always incredibly informative and very well engaged. So, I'm looking forward to seeing how that translates to the next phase of whatever Chuck Chat looks like. 

Chuck DeVries: News in what's happening, and then brain candy. The brain candy part, I think, I can definitely do, because I always wonder – as evidenced by today's conversation – "Oh, that's interesting. How far does this rabbit hole go? Oh, very deep." That part I can do. And I keep up with the tech news. It's really the discipline that I'm going to have to physically work on: actually writing down the news pieces to say, "Okay, these are the things, and this is what it actually means," and being able to talk it out.

Alex Cullimore: Have you tried a spreadsheet? 

Chuck DeVries: But I'll tell you what, I will use AI to bring it together. 

Cristina Amigoni: Yes, exactly. The orange monster is going to have to have a next generation, next life. Maybe different color, but something. 

Chuck DeVries: If you follow me on LinkedIn, you'll get access to the buzzword bingo board that I just released. So now I give you guys a reason to go out there and look at it. 

Cristina Amigoni: Yes, we definitely will. But Chuck Chat has become famous even in my house, where my kids are constantly asking me when I get to attend the next Chuck Chat. 

Chuck DeVries: It's an honor to be beloved by children more so even than adults. 

Cristina Amigoni: Says a lot, given where their attention goes. Yeah, thank you. And we always ask: what's your definition of authenticity? So we want to see if your definition has changed, even though we have no idea what you said the last time, because I don't remember. 

Chuck DeVries: I don't know what I said the last time either. Yeah, we'll have to correlate those things back. My personal definition of authenticity is wherever I go, there I am. As evidenced by my background with minions and toys and whatever, I believe that as a leader, as a team member, your job is to show up and to bring energy. And I think every interaction you have an opportunity to bring energy or take energy. 

And so, showing up with my authentic self is bringing all of that weirdness that I have just demonstrated here for the last however long this has been. But also caring about other folks. Part of my core, and I've had these forever, but part of my core is I want to help those around me to be their best selves. And if I can unlock that, that gives me joy. I want to be creative and drive tech. And as long as I get to do that, I'm happy. 

Cristina Amigoni: Tipping the scales on that negative social interaction. 

Chuck DeVries: Right. 

Cristina Amigoni: Just be around Chuck. 

Chuck DeVries: That's right. I want to be on the plus column of that big particular matrix math problem. 

Cristina Amigoni: Well, thank you, Chuck, as always. 

Chuck DeVries: It has been a pleasure as always. 

Alex Cullimore: And thank you everyone for listening. 

Cristina Amigoni: Thank you. 

Alex Cullimore: Thanks so much for listening to Uncover the Human. We are Siamo. That is the company that sponsors and created this podcast. And if you'd like to reach out to us further, or reach out with any questions, or to be on the podcast, please reach out to podcast@wearesiamo.com. Or you can find us on Instagram. Our handle is wearesiamo. Or you can go to wearesiamo.com and check us out there. Or I suppose, Cristina, you and I have LinkedIn as well. People could find us anywhere. 

Cristina Amigoni: Yes, we do have LinkedIn. Yes. Yeah. And we'd like to thank Abbay Robinson for producing our podcast and making sure that they actually reach all of you, and Rachel Sherwood for the wonderful score. 

Alex Cullimore: Thank you, guys, so much for listening. Tune in next time. 

Cristina Amigoni: Thank you.

[END]

Chief Technology Troublemaker | AI Strategist | Former CTO | Building AI-first organizations that unlock human potential.

Chuck DeVries is an AI strategist, systems thinker, and technology builder focused on the intersection of advanced technology and human potential. His work centers on helping organizations become AI-first by designing systems where people and intelligent tools collaborate to do their best work, with an eye toward positive societal outcomes.

Over a 25+ year career spanning startups to Fortune 50 companies, Chuck has built and led large engineering and data organizations and helped move emerging technologies from experimentation into real operating capability. He holds a master's degree in computer science with an emphasis in artificial intelligence from the University of Texas at Arlington, and brings a rare combination of deep technical fluency and human-centered design philosophy to the challenges of AI adoption and organizational transformation.

Beyond his organizational work, Chuck explores the frontiers of AI through ChuckChat, a content series covering AI, health-tech, and emerging research, and through hands-on experimentation in generative AI art and knowledge systems. He is known for translating complex technology into meaningful human impact.