Presentation

Artificial ethics: Consent, privacy, and safety in software

Bridget Kromhout, a principal cloud developer advocate at Microsoft, gives specific examples of how your day-to-day development has ethical implications and how it impacts real human lives.

The decisions of technologists shape the world, whether you notice it or not: your code constructs reality.

Automation is great, but it also ups the stakes when it comes to ethical responsibility. Whether we're dealing with people's data or building decision-making engines, shirking that responsibility can have dire consequences.

We need to examine what we mean by consent, disclosure, opting in, and data retention. We need to take data privacy and security seriously. And as AI becomes more prevalent, how do we make decisions in its development that will prevent future disasters?

Bridget discusses these ethical questions, focusing on corner cases and defaults that have real impact on human lives.


Hi, thank you all for being here. I know the last talk of the day, even if it's starting a little earlier than previously planned, is still a little bit hard to get through, but good news: there is no code on slides. We do have some cat pictures, but no code on slides. I am going to talk about some heavy stuff, because I think these are conversations that are maybe a little overdue and that maybe we're starting to have in tech. The traditional second slide: who I am. I live in Minnesota. We hold our DevOps Days in July because it is the one month of the year that Minnesota has never logged snow anywhere in its borders.

I come here and people are like, "Oh, it's so cold." I'm like, "No, it's definitely not." I do tech advocacy for Microsoft (more on that later), I podcast with Arrested DevOps, and I am actually the chief cook and bottle washer in charge of the DevOpsDays organization. I think we have 64 conferences on six continents this year and, fun fact, DevOpsDays Amsterdam is actually this week, later this week. You can check devopsdays.org for conferences near you, or maybe start one in your city.

The responsibility of software engineering

It's funny, I was looking at some old textbooks recently, because I got a CS degree in the 90s, during the last AI hype cycle actually, and I was looking at my Scheme textbook and I said, "Why is there a wizard on the front? That's ridiculous." But then it actually isn't, if you think about the fact that we write code that creates reality.

Everyone in this room is probably the closest thing to a practitioner of magic that actually exists, which is a little bit weird. I don't know about you, but I grew up reading fantasy but also a lot of sci-fi, and I assumed I would live in the sci-fi reality. Well, I think the reality we ended up in is the oppressive cyberpunk dystopia. We don't live in the Jetsons; we live in Blade Runner or something like it. We've probably all read a lot of fiction and seen a lot of movies about the kind of universe we're finding ourselves in, and what I think we need to worry about is that there are a lot of effects on actual people.

I mean the rest of the people in this world, the ones who can't write the code to create the reality, are not NPCs and they are not bit players. They're all the protagonists of their own narrative, and we have the weighty responsibility of making a narrative that they might want to actually live in, that they might actually be able to survive in. No pressure, but that's your job. You get to leave here after a few days of workshops and talks about exciting tech, take that tech, and try to make the world slightly less terrible. I'm not going to say make it good, because we might be past that, but make it slightly less terrible, right? Let's talk about a few of the areas where I think we can do that.

Consent, privacy, and safety

We can stave off some of this dystopian present (it's not a future anymore) that we're living in if we think about how we're connecting to each other as people: how we're making sure that people have the ability to consent to what's happening around them, some amount of privacy (even if just in their own minds), and actual safety, and that's not just metaphorical, unfortunately. First of all, consent: we all want very fine-grained control over our lives, whether it's being extremely picky about how we configure our editors. I don't know about you, but I still use vi for everything; I call it vi, and this newfangled Vim is like "what?" We all want some control over the world that we live in, and in a lot of the systems that we're creating, the settings aren't necessarily intuitive.

They don't necessarily default to what someone might actually want if they really understood the settings, and we're not necessarily doing that because we're bad people. Maybe it was expedient, or maybe it led in the right direction for the sales funnel, but I think we really need to think about what kind of defaults we are giving people: would they be happy with that default once they understand it? If you're explaining to your family members what a default actually means and you find yourself saying "I'm so sorry," it gets really old apologizing on behalf of your fellow technologists and the decisions that they're making.

I feel like we should be prioritizing what our users would decide if they could, and if they actually understood the stuff that we've made hard to understand. It's the idea of informed consent, responsible disclosure: having people explicitly agree to having their data shared, or their location shared, or whatever it is that might not be great for them or might put them at risk. I think it's really important for us to be at least honest with our end users about what we're doing with their data, what toggles they actually have available to them, and what decisions they can make.
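As a rough illustration of what that could look like in code, here is a minimal sketch (the field names are illustrative, not from the talk) of settings that default to sharing nothing and only change on an explicit, recorded opt-in:

```python
# Hypothetical sketch: defaults that match what a user would likely choose if
# they understood them. Sharing stays off until the person explicitly opts in,
# and the opt-in is timestamped so the product can show what was agreed to and when.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class PrivacySettings:
    share_location: bool = False            # off unless explicitly enabled
    share_data_with_partners: bool = False  # off unless explicitly enabled
    consent_recorded_at: Optional[datetime] = None

    def opt_in_to_partner_sharing(self) -> None:
        """Flip the toggle only on an explicit user action, and note when."""
        self.share_data_with_partners = True
        self.consent_recorded_at = datetime.now(timezone.utc)

settings = PrivacySettings()
print(settings.share_data_with_partners)  # False until the user opts in
```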

I think there are a lot of anti-patterns around consent in tech, and a lot of dark UX patterns whose motivations are mostly driven by ad revenue. More on that later too; I am not entirely guilt-free in that realm. Okay, but let's talk about privacy for a minute. I think this one is interesting because I'm standing here in Europe and GDPR took hold, and from the vantage point of the US, I'm noticing two things. I come to Europe and boy, every single website wants me to click a lot of things; and in the US, we got an awful lot of emails telling us about accounts that we forgot to close, which we then went and closed.

I assume you've probably gotten those emails too, but when we're thinking about how we're dealing with user data, of course there are the general principles of GDPR, which hopefully the US will start to institute something like as well. There's also the question of: even if you're going to deactivate someone's account or remove it, how do you remove them from all the individual roll-ups, how do you remove them from all the reporting, what about things that are archived, and what about your wide-column data store with a whole bunch of versions in there?

It starts getting complex really fast, and I think a good way for us to think about it is: maybe we should just limit the amount of individually identifiable information we're storing in the first place. How much of that do we really need, how much of it do we actually act on, and how much of it has just created a huge hassle for ourselves in terms of having to clean it up? If we collect less of it in the first place, let alone retain it for analysis, we have a lot less to worry about in terms of our culpability. And this is where I admit to you that when the Cambridge Analytica Facebook data story came out, I was not even remotely surprised, because I worked at an EdTech firm from 2012 to 2014.
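As a minimal sketch of that data-minimization idea (the names are illustrative, not from the talk): a report that only ever needs counts can be built from aggregate counters, so there is no per-user record to scrub from roll-ups, reports, and archives later.

```python
# Hypothetical sketch: record only what the report actually needs -- an aggregate
# count per product per day -- instead of a per-user purchase log that would have
# to be deleted everywhere if that user later asks to be removed.
from collections import Counter
from datetime import date

daily_purchases: Counter = Counter()  # keyed by (day, product_id); no user IDs kept

def record_purchase(product_id: str, when: date) -> None:
    """Increment an aggregate counter; nothing individually identifiable is stored."""
    daily_purchases[(when.isoformat(), product_id)] += 1

record_purchase("stroller-123", date(2018, 6, 27))
record_purchase("stroller-123", date(2018, 6, 27))
print(daily_purchases)  # Counter({('2018-06-27', 'stroller-123'): 2})
```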

If you're familiar with Facebook at all, you're aware that this was before they changed the terms of service in 2014 and expired everybody's tokens in 2015 (well, almost everyone's) so that you couldn't get friends' likes anymore. What we had was people installing our app, and when they authorized our app on Facebook in order to tell their friends about the things they bought at the baby store or whatever, we would harvest all their friends' likes. Their friends didn't opt into this. Their friends didn't even know our app existed, and we used this to build profiles and work with retailers.

There were all sorts of things, hopefully not terribly evil, but on the other hand, there was a great deal of not opting in, and these people had no idea that their information was being used like this. If you're thinking, "oh, but Facebook doesn't allow that exact thing anymore," well, hmm, sort of. Today it's very easy to collect a lot of data with very little accountability. Again, as technologists, we need to think about how we are participating in that, because for some people it's not just which baby products they bought; it's maybe the fact that they're buying them, or not buying them at all. There's a lot about people's personal lives that their online history shows.

I think that whether it's overly broad app permissions or completely intrusive on-site tracking, there's a lot of stuff in there where we need to consider what we're doing. It's important for us to focus on the principle of least privilege: how much of this data do we actually need in the first place? And I don't know if you've seen some of the stuff that's come out about when you put in the wrong email address to authenticate on a website and it says "that's not your password," and then it shows a name or whatever, and you're like, "that's not my name; it just revealed something to me."
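As a minimal sketch of avoiding that kind of account-enumeration leak (a hypothetical login check, not from the talk), the same generic error can be returned whether the email is unknown or the password is wrong:

```python
# Hypothetical sketch: the login response never reveals whether an account exists
# or whose name is on it; unknown email and wrong password look identical.
import hmac
import hashlib

# Toy "database"; a real system would store salted password hashes (e.g. bcrypt).
USERS = {"alice@example.com": hashlib.sha256(b"correct horse").hexdigest()}

GENERIC_ERROR = "Invalid email or password."  # never "no such account", never a name

def login(email: str, password: str) -> str:
    stored = USERS.get(email)
    supplied = hashlib.sha256(password.encode()).hexdigest()
    # Compare against a dummy value when the account doesn't exist, so the
    # code path and the message look the same either way.
    expected = stored if stored is not None else hashlib.sha256(b"dummy").hexdigest()
    ok = hmac.compare_digest(supplied, expected)
    if stored is not None and ok:
        return "Welcome back."
    return GENERIC_ERROR

print(login("nobody@example.com", "guess"))  # -> Invalid email or password.
print(login("alice@example.com", "wrong"))   # -> Invalid email or password.
```

A real implementation would use a proper password-hashing scheme; the point of the sketch is only that the response never confirms whether an account exists.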

I think it's really important for us to think about exactly how we're storing things and how we're showing them, in terms of protecting our end users, or people who encounter them, or people who know them. The third thing I want to talk about (and these are all interconnected, all related) is security, and that's a very easy word to just wave at. Security in the abstract: we can just say we care. It becomes a lot more concrete when, say, people are being targeted by a government, when you're starting to have discussions about which organizations your employer is working with. It becomes very relevant immediately, and whether people are being targeted metaphorically via profiling or literally via drones or imprisonment, there are human effects to everything that we're working on.

Finding out what's going on inside your organization

I think it's really important and grounding for us to think about that. Whatever we're working on could have critical implications for how someone's life goes, which is sobering, but we have to think about it. Whether it's the US immigration news from last week or anything else, there's been a lot of this kind of news lately. Usually, if someone comes to you and says, "create a system for data mining and surveillance," you can say "no." But maybe they come to you and say, "let's just make everything more effective and more efficient," and you're like, "efficiency is great," and later you realize, "I made evil more efficient; that wasn't really what I wanted."

Again, as technologists, these are questions that we have to ask. You can start by asking them inside your organization. There's also a plethora of hashtags and community out there on Twitter, people having these conversations across organizations, because we need safeguards to prevent individual people or population groups from being targeted systematically and negatively because of our work. I mentioned AI, and you can just paste some AI on anything, but really this is the problem, right? It would be very simple for us to say, "well, I'm not going to make a decision to do a terrible thing," but probably no human is making the individual decision to do a lot of terrible things. Instead we're building systems and setting them loose and saying, "good luck, have fun, hope you don't turn into the Cylons."

We have to think about that because that's the world that we actually live in, and they're probably the scary Cylons that look like us too; they're probably not the toasters. Okay, so I think we have to explicitly ask ourselves: what are our ethical lines, what will we work on, what will we not work on? It's probably a good idea to have those conversations before you see your company in the news, just saying. Think about where your line is, which people you serve, the interests that you're serving, the people who are affected by what you're doing, and where those interactions are, especially the intended versus unintended effects of any automation that you're creating.

That is again the problem with a machine learning system—we can end up with effects that we didn't intend or expect or predict, but we create a system and we create a clockwork and we set it loose, and a lot of stuff happens. I feel like we need to have those conversations so that even if we don't stop it before it happens, we can get in the way of these things.

By the way, this is a picture of the Colossus [14:06]. It was used in code breaking during World War II, and it was pretty unambiguous then for us to say, "tech did great things, tech saved the world." And yes, sometimes tech does great things and saves the world, and sometimes maybe less so. I think we need to think about that. We can't just say, "Hey, Bletchley Park has museums and tech was great," the end.

That's not the end of the story. That's the beginning of the story, and we want to live up to it. I think it's also interesting that I'm giving this talk in Europe, where arguably there's been a lot more work on and thought about these questions of privacy and security and consent. I don't work for a Bay Area company and I don't live in the Bay Area, but I do think that the conversations around tech in the US can often feel very localized to that area. If you've ever flown into San Francisco and looked out your airplane window, you'll have seen this colorful stuff in the water, and you're not even really sure what it is. Maybe you care enough to google it and find out.

It's actually the salt flats: because of not really caring about the environment for quite a while, a major corporation was basically just mining a lot of salt out of the bay there. Because they're now trying to renew the ecosystem, these salt flats are going to be gone in probably 30 years. It will not be anywhere near as colorful when we're flying into the Bay Area. I bring that up because it's very easy for us to look at a context in a specific moment and say this is good or this is not good, when everything is a continuum. Everything is going to change over time. Even if we look at something one way now, it's going to look very different in just a few years, and if you don't believe me, just think about all of the stuff that you're doing with your laptops and your phones and think about what you were doing a decade ago.

Yeah, you were doing a bunch of this stuff, but it wasn't the same, and a decade from now it's probably not going to be the same either. I hope it's not a much, much worse, more oppressive surveillance society, but I'm a little concerned that we're going that way. If we talk about the ethics of our community at large rather than localized to specific areas: if you're going to be in the UK in a few weeks, take a look at coedethics.org; there's going to be a very good conference curated by Gareth Rushgrove and Anne Currie about this topic in London. If you're in the US, there's the Bay Area YIMBY community; they're like, "yes in my backyard."

There are a lot of people from tech who are working on turning whatever powers we have towards good things, towards social justice. I think it's important to realize that even those of us who may have thought, "I work at a giant corporation and there are probably parts of it that I'm not going to like, but the parts I'm in are good," can then realize there are parts we don't like enough that we need to have a conversation several layers above our management chain, because we're not happy with specific things. This was my week last week.

You can imagine this talk is a little bit different this week than it would have been a couple weeks ago, but I feel like I'm not the only person, and you're not the only person, who's thinking: what is my organization doing, what do we stand for, what do we care about, and what are we going to do? If you're not asking yourself those questions right now, you probably should be. And hey, if you work at an organization that is just you, then it's probably 100% aligned with everything you want to do. I have bad news for you if you're at a company of two or more people: not everything will be exactly what you want.

Sorry not sorry, that is reality, but that does mean these are conversations to have inside your organization, whatever its size, because it is important to realize that we have superpowers and we can use them. This is a question of how you're going to use them. If you can't read it, by the way, this is a picture from Bletchley Park [18:30], and for whatever reason, it seems like they felt the need to have directions on the telephone, so it says, "to call, lift telephone and listen; replace telephone only when finished." If that seems obvious to you, just remember: everything seems easy once you've done it.

We saw lots of great talks today, and some of them had really elegant technical solutions, but you know as well as I do that when we're starting to create these solutions, it's never perfect and obvious and easy. It's something we figure out, usually by doing a bunch of things wrong. And that gives me hope, because I think that in the US right now, and in Australia, and in a bunch of places, we're doing some things wrong in terms of watching out for the most vulnerable users of our tech. I think we can do better by having these conversations and by taking action, whether direct or indirect. Think about it this way.

How to make things better

There are a lot of places where no one is going to argue with you if you make a coding choice that preserves people's privacy better. No one's going to say, "why did you do that? That's a terrible idea." If you can defend it technically (hey, look, it's way easier to maintain and way less complicated), that's great. There's indirect action too: having those conversations inside your organization, inside your communities of practice. It's not a boolean between "don't care" and "riot in the streets." There are a lot of things we can do, and I would encourage us to do them.

Very specifically and tactically, though, as I was mentioning: every time you're going to log something or collect a report on something, ask yourself, do we need to? Can we do this in aggregate? Can we do it in a way that anonymizes it, that de-identifies it, that doesn't open people up later to being targeted for it? A lot of us in this room are writing the code that figures out how to log something, so I challenge you all to think about that. Can you log it in a way that will do less harm, and also be less complex and less to maintain?
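As a minimal sketch of that kind of logging choice (the helper and the salt here are illustrative assumptions, not from the talk), an identifier can be pseudonymized with a keyed hash before it ever reaches the logs, so events from one user can still be correlated without storing who that person actually is:

```python
# Hypothetical sketch: log the action plus a keyed hash of the user identifier,
# never the raw email or ID, so the logs are less harmful if they leak.
import hashlib
import hmac
import json
import logging

logging.basicConfig(level=logging.INFO)
LOG_SALT = b"rotate-me-per-deployment"  # illustrative; load from secret storage in practice

def pseudonymize(user_id: str) -> str:
    """Keyed hash of the identifier; stable within a deployment, meaningless outside it."""
    return hmac.new(LOG_SALT, user_id.encode(), hashlib.sha256).hexdigest()[:16]

def log_event(user_id: str, action: str) -> None:
    logging.info(json.dumps({"user": pseudonymize(user_id), "action": action}))

log_event("alice@example.com", "exported_report")
```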

The tl;dr on this, basically, is that I think we have the technical ability and the human responsibility to focus on making sure people are consenting. People talk about consent in a lot of contexts, but I'm talking specifically about everything you click "okay, agree" on that you don't actually read. You're a professional, and you know that you probably should read it, and there's probably something in term 42A that you're not going to like, and you don't read it. Come on, don't tell me you read it. I know you don't, because I don't either. But think about what's in there and what people are agreeing to.

Make sure that they know about anything that you think could harm them. For privacy, worry about leaking details between abstraction layers, and follow the principle of least privilege to make sure you're not putting data out there that doesn't need to be out there. Then for safety, just remember there are humans in the mix. They're being affected by everything that you write, everything that you commit, everything you push to production. There's probably a person who's going to interact with it, and I think it's important for us to remember that, because I'm going to tell you, I got a computer science degree because I didn't want to talk to people. Turns out, spoiler alert:

Soylent Green and tech are made of humans, and you are going to talk to people, and even if you're not talking to them directly, you are shaping the world that they live in. That's basically it: tech is made of people, and adorable cats. We had to have the adorable tech kitten in there, but tech is made out of people. We focus a lot on the tech, we do. I mean, at this conference, and tomorrow I'm giving a 3-hour workshop on a container orchestrator at DevOps Days. Of course we focus on the tech; it's interesting and it's something that we like. But I think it's really important for us to also focus on the people, because the tech is not much without the people.

I can assure you that while my cat very much enjoys sitting on that laptop, because warm, he's not actually going to produce any production code whatsoever. I can attest to this: that is not happening. Yeah, that's pretty much it. I think that we do probably live in the darkest timeline. I am pretty sure we live in the darkest timeline, but until that glorious day when the walls of the multiverse collapse and we can choose to live in the reality where the US president is a woman and she's super progressive and she and Angela Merkel are meeting about politics, yes, I'm imagining that future. That's a pretty exciting future. We don't live in that future right now.

Right now the reality that we live in is one that we're stuck in and I want us to use our tech superpowers to take care of each other in this world. That's it, thanks.
