FACEIT with Maria Laura Scuri

Happy Halloween! Today, Jon Foust and Brian Dorsey chat with Maria Laura Scuri of FACEIT about ways they are reducing toxicity in gaming. FACEIT is a competitive gaming platform that connects gamers with competition and tournament organizers. To do this well, FACEIT has put a lot of energy into finding ways to keep the experience positive for everyone.

Because gaming toxicity can involve anything from verbal jabs to throwing a game, FACEIT uses a combination of data collection tools and input from players to help identify toxic behavior. In identifying this behavior, FACEIT has to consider not only the literal words spoken or actions taken, but the context around them. Is that player being rude to strangers, or egging on a friend? The answer to this question could change the behavior from unacceptable to friendly banter. FACEIT's own machine learning model then gives each interaction a score indicating how toxic the player was in that match.

The toxicity scores, along with their AI admin, Minerva, determine whether any bans should be placed on a player. FACEIT focuses on punishing the player's behavior, rather than the player themselves, in an effort to help players learn from the experience and change the way they interact with others in the future.

Maria's advice to other companies looking to reduce toxicity on their platforms is to know the context of the toxic event: know how toxicity can express itself on your platform, and find ways to deal with each of those forms. She also suggests tackling the issues of toxicity in small portions and celebrating the small wins! Her final piece of advice is to focus on criticizing the behavior of the user rather than attacking them personally.

Maria Laura Scuri

Maria is the Director of Business Intelligence at FACEIT, the leading competitive platform for online multiplayer games with over 15 million users. She joined FACEIT as part of the core team in 2013 as an intern assisting with everything from customer support to event management. Her passion for data, machine learning, and artificial intelligence saw her quickly rise through the ranks to her current position, leading the Business Intelligence and Data Science teams.

Maria works side by side with some of the biggest tech companies in the world including Google Cloud. She is the main lead on a number of projects including the inception of an Artificial Intelligence Admin to fight toxicity on the platform. Maria is responsible for implementing best practices around data visualization and tools that allow the FACEIT team to thrive, as well as sourcing and training new talent.

Maria is a huge video games fan. You can find her on League of Legends as “FACEIT Lulu” and on Steam as “Sephariel”.

Cool things of the week
  • What can Google Cloud do for you? New trainings for business professionals blog
  • Leave no database behind with Cloud SQL for SQL Server blog
  • How to orchestrate Cloud Dataprep jobs using Cloud Composer blog
  • Updates make Cloud AI platform faster and more flexible blog
  • Use GKE usage metering to combat over-provisioning blog
Interview
  • FACEIT site
  • FACEIT blog
  • FACEIT on Medium site
  • Steam site
  • Perspective API site
  • BigQuery site
  • Looker site
  • Cloud Datalab site
  • Jupyter Notebook site
  • Cloud AI Platform site
  • TensorFlow site
  • Google Cloud Data Labeling site
  • Google Translation site
  • Dealing with CS:GO Free to Play and Addressing Toxicity in Matches blog
  • Revealing Minerva and addressing toxicity and abusive behaviour in matches blog
  • One of Europe’s Largest Gaming Platforms is Tackling Toxicity with Machine Learning blog
  • FACEIT And Google Partner To Use AI To Tackle In Game Toxicity article
  • FACEIT implement Minerva, an AI to punish toxicity in CSGO blog
  • FACEIT Takes On Toxicity With Machine Learning article
  • Exploring Cyberbullying and Other Toxic Behavior in Team Competition Online Games whitepaper
  • Toxic Behavior in Online Games whitepaper
  • A Look at Gaming Culture and Gaming Related Problems: From a Gamer’s Perspective whitepaper
  • An Analysis of (Bad) Behavior in Online Video Games whitepaper
  • Toxicity detection in multiplayer online games whitepaper
  • Jon’s gaming info steam
    • BattleNet: Syntax#11906
Question of the week

When I SSH into my VM via different methods (Cloud Console, gcloud, terminal/command prompt), I get a different username… What can I do to make that static?

Where can you find us next?

FACEIT will be at Next London and GDC

Brian will be at Super Computing in Denver.

Jon will be at AnimeNYC and KubeCon in November, and Google Kirkland and Montreal in December.

[MUSIC PLAYING] JON: Hi, and welcome to episode 201 of the weekly Google Cloud Platform podcast. I'm Jon Foust, and I'm here with my teammate, Brian Dorsey. Hey, Brian. How's it going?

BRIAN: Good. Hello, hello, everyone. Looking forward to a good show.

JON: This episode, we're actually joined by Maria Laura Scuri from FACEIT. She's a director of business intelligence. And they're doing a lot of work in reducing toxicity in texts and voice chat for games. And you know I love games, Brian, so it's definitely going to be very interesting.

BRIAN: That's awesome, and I'm really looking forward to it. That's an incredibly important area to make things better online.

JON: Very excited to actually have Maria because it's not very often that we have many women who are in the gaming realm. I'm particularly excited to have Maria on this episode.

BRIAN: Awesome. Looking forward to it.

JON: Also, we're going to get into our question of the week. Brian's going to dive in on this. It's: when I SSH into my VM via different methods-- Cloud Console, gcloud, any other method that you would like to use-- I get a different username each time. What can I do to make that static?

BRIAN: Ooh, SSH and Google Cloud and VMs. I'm excited.

JON: But before that, let's get into our cool things of the week.

[MUSIC PLAYING]

BRIAN: Got two weeks' worth, so it was a hard thing to choose. There's lots of extras. A few highlights-- one thing that jumped out at me is we've got a new series of trainings, specifically for business people who want to kind of go deep on the business ends of Google Cloud. So most of our trainings have been for hands-on technical folks, and this is geared more towards kind of making sense of the business decisions and how you balance trade-offs there. So excited about that.

There's three of them to start-- Business Transformation with Google Cloud, Machine Learning for Business Professionals, and Leading Change in the Cloud Era. So that'll be good to check out. If that's you or if there is a kind of a businessy person on your team who you'd like to help get a little more bootstrapped to Google Cloud, send them this way.

JON: So one of my cool things of the week is the addition of Microsoft SQL Server to our Cloud SQL portfolio. And this is great because there are a lot of benefits that come along with it, especially for a lot of our users who are more accustomed to using Microsoft SQL Server. If you're used to it, the great thing is that we're offering multiple editions of the current version of SQL Server. And you can also use popular clients to manage it, such as SQL Server Management Studio.

And the other key benefits include flexible backups, scalability, and high availability. This would be great. So there's a lot of [INAUDIBLE] failover, scaling whenever you reach your limit, and daily automatic backups, or you can run them on demand.
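
For listeners who want to try it, creating a SQL Server instance with gcloud looks roughly like the sketch below. The instance name, tier, region, and password are placeholders, and the exact flags may have changed since this episode aired, so check the Cloud SQL docs.

    gcloud sql instances create my-sqlserver-instance \
        --database-version=SQLSERVER_2017_STANDARD \
        --tier=db-custom-2-7680 \
        --region=us-central1 \
        --root-password=CHANGE_ME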

BRIAN: That's awesome. Me too. I know for people who use SQL Server, it has so many features and you end up using a lot of them, so you need SQL Server. And it's great that that's available in a nice, managed fashion now.

So next up, kind of continuing the data theme, is a recent post on how to orchestrate the data preparation side of things-- the ETL perspective of getting data into databases or preparing it for machine learning and AI back ends. And so there's this article, "How to Orchestrate Cloud Dataprep Jobs Using Cloud Composer," which I just wanted to highlight. Dataprep is an integrated version of this tool called Wrangler from Trifacta, and it lets you both work visually and interactively and run batch jobs across the full set of data preparation tasks you'd want to do, and see anomalies and all kinds of good stuff. And then Cloud Composer is managed Apache Airflow, which is a workflow management tool. That's a little abstract, but I think about it kind of like executable flow charts-- when you draw those whiteboard diagrams about if this, then do that.

So you can actually make code that makes those decisions and executes them, like running subprocesses or calling APIs and things like that. And this is a really great example of setting up a full, clean preparation pipeline using those tools together. So check that out.
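
As a rough illustration of that "executable flow chart" idea, here is a minimal Airflow DAG sketch that triggers a Dataprep job over HTTP on a schedule. The connection ID, recipe/dataset ID, and access token are hypothetical placeholders; the linked blog post walks through the real setup.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.http_operator import SimpleHttpOperator

    # Minimal sketch: kick off a Cloud Dataprep recipe once a day via its REST API.
    # "dataprep_api", the dataset id, and the token below are placeholders.
    with DAG("dataprep_job_trigger",
             schedule_interval="@daily",
             start_date=datetime(2019, 10, 1),
             catchup=False) as dag:

        run_dataprep_job = SimpleHttpOperator(
            task_id="run_dataprep_job",
            http_conn_id="dataprep_api",   # Airflow connection pointing at the Dataprep API host
            endpoint="/v4/jobGroups",      # job-run endpoint; check the Dataprep API docs
            method="POST",
            data='{"wrangledDataset": {"id": 12345}}',   # hypothetical recipe/dataset id
            headers={"Content-Type": "application/json",
                     "Authorization": "Bearer YOUR_DATAPREP_TOKEN"},
        )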

JON: Yeah, that actually sounds pretty amazing. I'm probably going to check that out myself. So following on the AI trend, our AI Platform Prediction service lets data scientists serve models for online prediction in a serverless environment. So we're announcing a couple of updates to that.

You can now choose from a set of Compute Engine machine types to run your model. And you can add GPUs, such as the inference-optimized, low-latency NVIDIA T4. It's a very powerful GPU that you can attach to the machines running your model. The other features included are custom containers and Compute Engine machine types for training, so you can choose any combination of CPUs, RAM, and accelerators. And custom containers are very useful because you can work with any model framework or dependency. If you're interested in that, please check it out.
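
As a rough sketch of what that looks like, deploying a model version onto a Compute Engine machine type with a T4 attached goes something like this. The model, bucket, and version names are placeholders, and at the time these options were in beta, so check the current AI Platform docs for the exact flags.

    gcloud beta ai-platform versions create v1 \
        --model=my_model \
        --origin=gs://my-bucket/model/ \
        --runtime-version=1.14 \
        --machine-type=n1-standard-4 \
        --accelerator=type=nvidia-tesla-t4,count=1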

BRIAN: That sounds great. Skimming the article, it looks like basically having a new back end on Kubernetes Engine is what allows this. So that's a great segue-- the GKE, Kubernetes Engine, team has added detailed usage metering as a feature. And that's just gone GA, Generally Available, so it's all supported and ready to use. And what that gives you is monitoring down at the pod level, and then aggregating that up to the namespaces in Kubernetes, or to labels that you put on the pods, so that you can see exactly how much usage is happening from each of the teams that are working on things, or different programs, or any other dimension that you care about.

And the reason I'm excited about this and wanted to highlight it is that at some level, getting high utilization, like lots of usage out of the computers you buy or rent, is kind of the point of containers and getting that without having the programs conflict with each other. So you really want to push the high utilization to get your money's worth out of these systems. And this helps you see where that's happening and where it's not.
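
If you want to try it, enabling usage metering on an existing cluster is roughly a one-liner that points the cluster at a BigQuery dataset. The cluster and dataset names here are placeholders; see the linked post for the full setup, including the BigQuery export and the reporting template.

    gcloud container clusters update my-cluster \
        --resource-usage-bigquery-dataset=my_usage_metering_dataset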

JON: So that wraps it up for our cool things of the week. And we're pretty excited to get to our guest Maria. So let's go talk to Maria.

[MUSIC PLAYING]

JON: Hi. And on this episode we're joined by Maria Laura Scuri from FACEIT. So, Maria, pleasure to have you. Would you like to tell us who you are and what you do?

MARIA: Sure. Well, thanks a lot for having me today. It's great to be here. So yeah, as Jon was saying, I work at FACEIT, and I'm the director of business intelligence, which I think you can summarize into two main things. On one side, we try to do a lot of machine learning and artificial intelligence on many topics.

And then on the other side, what we really want is to drive the culture inside our business, inside our company-- to make people understand how they can use data and how it can be used to get insights in your day to day. So making sure that when we release a new feature, we can actually measure the impact, instead of just saying, oh, we think that's going to do great, but then you can't quantify that "great." So I guess that's what we're trying to do here.

JON: That's awesome. And I've actually heard quite a bit about you. A bunch of Googlers actually told me about the work that you do, the work that FACEIT does. And we just had to have you on the episode today. So can you tell us a little bit more about what FACEIT is doing?

MARIA: So FACEIT is a competitive gaming platform, which I guess doesn't tell much if you're outside of gaming. So to clarify, I think you can think about FACEIT kind of like a marketplace, in a way. On one side, you have gamers, players that want to take part in different competitions, which can range from our own proprietary matchmaking to different types of tournaments. So you've got leagues, you've got round robin championships, you've got the standard single-bracket tournaments.

And then on the other side, we have lots of people in eSports and competitive gaming that are actually spending their lives organizing these competitions. And it ranges from guys at school at university that want to organize their own university league, to professional businesses. There's many organizations in the sector that are organizing different types of competitions.

What FACEIT does is just try to get these people together. So on one side we have players, and on the other side we have organizers, and we are basically trying to get them to meet each other. So we have several systems on the platform that actually help players find the right competition, the one that matches their own interests.

JON: Gotcha. So you've got like strong competition, online gaming. So I imagine everything always goes smoothly in these environments?

MARIA: Not really. So [INAUDIBLE] that has always been core for us, but also for game publishers as well, because we work closely with a lot of game publishers. So one of the things that we have is a very smooth integration with the game. It's very easy-- with the click of a button, you go from FACEIT to the game.

And things are not always that smooth when you actually put several people that don't know each other that well into the same online scenario. I think toxicity has always been a big issue in gaming. It's definitely an issue for us as well.

I think it's probably due to some of the aspects of matchmaking, or the gaming experience in general. On FACEIT, for example, just to give you an example, we have 100,000 concurrent players at the same time. You need just 10 of them to make a match. And honestly, the likelihood that after this match you're going to encounter exactly the same nine other players again is close to zero.

The social inhibitions that you would have in other environments, like at school, mean you're not going to just walk into class and start insulting somebody, because then you're faced with that person every day, no? But in gaming, well, I'm probably spending 30 minutes of my life with these other nine people. And if I think another guy has just done something that is really stupid, I'm probably going to say that to him. And I'm probably going to say it in not such a polite way.

And in the end, I think it's an issue with online behavior in general. Anywhere you're online-- with video games especially, but in other situations as well-- you're yourself, but you're also using an avatar to go around. And I think that already lowers some of the social inhibitions that you would have in other contexts.

JON: So you've got a whole group of people, everyone with kind of lowered social inhibitions in a stressful environment. Things can go poorly pretty easily, huh?

MARIA: Yes, you already get people starting with lower inhibitions. And then you add on top of that, especially on FACEIT, that the environment is really, really competitive for different reasons. On one side, we have [INAUDIBLE] to become a professional player and to actually get the chance to be noticed-- as you rank up on FACEIT, you can end up in our FACEIT Pro League, where, for [INAUDIBLE] for example, lots of professional players have actually been talent scouted from there-- while in some tournaments you might be competing for money as well.

So with the two things together, you have a bomb that's about to explode, in a way. There's lots of tension that builds up, especially if you're seeing that the match is not really going in your direction. And the way that's expressed in some cases-- well, not all the time, but in quite a few cases-- can be pretty toxic, honestly.

JON: So I am a very competitive gamer myself. I actually had a meeting earlier this week with Brian, and I told him that I had been gaming for 20 hours straight. And I can tell you that after gaming for 20 hours, your personality will definitely change. Although it's not really you that people see when they're gaming, I can say that after a while-- once I was on a pretty good winning streak, and I was just really excited to finally be back to the rank that I believe I belong in-- you start to judge people quite often.

So the question I really have is, what are you really doing to address the toxicity issue? Because there are different forms of toxicity-- throwing games, there's verbal, there's text. So what are you really doing to address that, and how does FACEIT address it?

MARIA: So I think we are trying to take a quasi [? anti-normative ?] approach when it comes to that. So firstly, we started with [INAUDIBLE]. And we actually had a case study published by Jigsaw about how we are using and integrating Perspective API, one of the Google tools, into our system. But on the other hand, we do know that there is more to it.

So text is probably just one of the ways that toxicity clearly surfaces. But as you were saying, there are just so many different behaviors that you can annoy people with. If we take Counter-Strike, you could be griefing-- maybe you are just blocking the passage for somebody. There's voice chat, you could kill a teammate. So there are so many different things.

So what we are really trying to do is basically build a comprehensive system that is going to be able to analyze and understand each of these different behaviors. So we see text chat as just the very first milestone for what we are actually building. And one of the next steps we have, for example, is that we are building a UI that's going to involve our community to actually help us understand what they see as toxic.

Because I think the other problem that you're actually trying to solve is that toxicity is a perception of toxicity. So you don't have a clear cut "that was definitely toxic," especially when it comes to certain behaviors. There are things that you always need to evaluate: intention on one side, and whether it was actually perceived as toxic on the other. Because there are some situations where what you're saying could, in some scenarios, be considered toxic, but maybe if I'm in a group of friends, then it qualifies as banter.

So there are all these different nuances that are not easy to understand, on one side. And then, on the other side, what we're really trying to do-- and what our system basically looks like-- is get data from different sources. So we take text chat, we take voice chat, and we also basically have the UI stream, which should be a stream of every event that happened in the game.

And we are recording these sets of events. So we know, for example, if in a match somebody started a vote to kick somebody off, we know if there was a team kill, we know if there were other signals. On the basis of all these different sources, the first step that we want to do is basically create cases that our players are going to help us label. So the first step is, let's make this interface-- this system that we are actually going to release, and it's probably going to go out in a month.

But yeah. The first step would be, let's get the input from our players to help us actually understand what's going on and label all this data. And then this is probably going to feed into different models that are going to be able to analyze each stream, and then finally we get a comprehensive understanding of what really went on.

JON: OK, so just measuring all of this is a real challenge. You described a couple of scenarios. So those are some of your inputs, and then you get some of the labeling. Is it all just labeling?

MARIA: Labeling is just the first step. Then basically, what we're doing is-- so if we take chat toxicity as an example-- we sent almost 200 million messages to Perspective API, and for each of those messages, we got a score back. Then we went in and actually analyzed the scores that we were getting, because the score by itself might not necessarily mean that much-- of course, the closer the score is to 1, the more toxic that message should be perceived.

But the other part that we were interested in from our side was also to try to understand, OK, what is the full sentiment in that match? So it's not just about the single comment by itself, because toxicity usually builds other toxicity. So we also wanted to take the whole chat into consideration. On top of that, we actually built our own machine learning model to basically get to a final score for that user in that match.
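
For a sense of what that per-message scoring looks like, here is a minimal Python sketch of asking Perspective API to score a single chat message, assuming you already have an API key with access to the CommentAnalyzer service. The example message is made up, and this is just the scoring call, not FACEIT's full pipeline.

    from googleapiclient import discovery

    API_KEY = "YOUR_API_KEY"  # placeholder; Perspective API access is granted per project

    # Build a client for the CommentAnalyzer service behind Perspective API.
    service = discovery.build("commentanalyzer", "v1alpha1", developerKey=API_KEY)

    analyze_request = {
        "comment": {"text": "nice shot, now uninstall the game"},  # made-up chat message
        "requestedAttributes": {"TOXICITY": {}},
    }

    response = service.comments().analyze(body=analyze_request).execute()
    score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    print(score)  # 0 to 1; the closer to 1, the more likely readers would perceive it as toxic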

On the basis of that score, then Minerva, which is basically our admin AI-- so we are basically just trying to replicate what a human moderator would do. If we were able to have enough people to moderate 5,000 concurrent matches, or 10,000 concurrent matches, then you probably would, but it doesn't really scale that well. So yeah, on the basis of this final score, Minerva takes a certain action-- either warns the player or issues a ban.

I think there were two sides to it. One thing was, firstly, implementing Perspective in this case. Then it was a lot of understanding what that score actually means, but also what that score means relative to our specific population. Because especially with a system like Perspective, which was built for a broader scenario-- the online perception of chat and how each comment can affect the conversation-- I think it's important to then relate that to your own use case.

Because moderating a gaming community of 18-to-25-year-olds is probably very different from my grandma, who wanted to read an article in "The Times" today and was shocked by a specific comment that, honestly, in the context of gaming, would have been totally accepted. So I think there's always a little bit of differentiation that you need to make and that you need to consider.

And then I think on top of that, now you've got your machine learning model, you've got your metrics. In our case, it's this metric that tries to define, let's say, the level of chat toxicity of the user in the match. And then you could have another metric that defines the level of voice toxicity the user had in the match. And then you're going to have a third metric that tries to understand their behavior. And then you will need to try to understand how these metrics are going to coexist with each other.

In our case, what we are trying to do is punish the behavior rather than the user. So we are trying to make a distinction between "you are being toxic" and "you did something toxic." And we are trying to go in the second direction. So we are really trying to say, you did something toxic that you shouldn't have done, but that doesn't qualify you as a person.

So in our case, in this example, if you were toxic in the chat and then, say, team killed somebody, you would be [? issued ?] two different bans. And those bans and those warnings would be targeted to the behavior, because ultimately, what we are trying to do is pinpoint exactly what you did wrong and allow you to understand what happened and possibly learn from it. That's the part that we're really interested in. And what we really want to try to do is actually find a way-- instead of punishing toxicity, can we prevent toxicity, and can we do something that actually changes the behavior of our users, of our players, and potentially changes it for the better?

So, for us, that was really important. And that's something that Perspective helped with-- for example, for text chat, the model we are developing is actually also able to tell the users: this is what you said, this is how it escalated, and this is why it's not acceptable. And we really value that feedback that we can give.

JON: I'm kind of curious, because you've mentioned that you're currently using our Perspective API, and you're using this to build out a bunch of models. I'm curious-- aside from Perspective API, are you utilizing any other GCP products to address issues in gaming?

MARIA: Oh, yeah, lots of them.

JON: I've always thought about toxicity as something that is a really big issue in gaming, [? experiencing ?] it myself. How do you deal with it culturally as well? Because you can see that maybe in certain cultures, certain phrasing and certain words are not considered an insult or toxic, whereas in another country, they may be completely unacceptable.

MARIA: So in terms of GCP products, we're using a lot of BigQuery, Google BigQuery. It's basically the basis of any machine learning that we want to do, because that's where we do all our data exploration. So everything that happens on the platform is fed into BigQuery. Our whole data warehouse basically lives there.

Another tool that we're using a lot is Looker, which Google Cloud, I think, acquired a couple of months ago. We're using that a lot for data visualization. So we usually do the first pre-processing inside BigQuery and then just plot things in Looker. We're also using a lot of Cloud Datalab, especially because that really simplifies a lot. When you are tackling an issue like toxicity-- we are tackling it with a team of 10 people-- it was really important for us to be able to collaborate on Jupyter Notebooks as well.

And if you don't use Cloud Datalab, but you're building your own notebook locally, then you need to convert that file [? into an ?] [? HTML ?], or you need to load that file somehow and then send it to the other person, who needs to work on it. Then you need to share it-- well, you might be sharing the code base in GitHub, but sometimes that's not enough.

So, for example, Datalab was really useful for us because, A, it's integrated directly with BigQuery. You can easily take SQL queries that you would run on BigQuery and put them inside your notebook, so that's super easy. And then on the other side, it was very good for us because you can share notebooks, and basically you can have multiple people working on them. So that was great.
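
As an illustration of that notebook-to-BigQuery workflow, here is a minimal sketch using the BigQuery Python client from inside a notebook. The project, dataset, table, and column names are hypothetical, just to show the shape of the exploration step.

    from google.cloud import bigquery

    client = bigquery.Client()  # picks up the notebook's default project and credentials

    # Hypothetical table and columns, just to show the shape of the workflow.
    query = """
        SELECT match_id, player_id, toxicity_score
        FROM `my-project.chat_analysis.message_scores`
        WHERE toxicity_score > 0.9
        LIMIT 1000
    """

    df = client.query(query).to_dataframe()  # requires pandas in the notebook environment
    df.head()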

Then, of course, we're using the Cloud AI Platform. So we're building our machine learning models [INAUDIBLE]. We are using mainly TensorFlow. For example, one of the things we have there is-- on FACEIT we generate lots of data. For example, with Perspective, we had 200 million messages that we wanted to analyze and process quickly. You can't simply do that locally.

So being on Cloud AI Platform and being able to spawn a virtual machine where you can basically process as much data as you want, depending on the machine you're actually spawning, simplified the process a lot. I think it took away a lot of headaches that you don't really want the data science team to think about. When it comes to your architecture, you don't want to need DevOps experts on your data science team thinking about how to set up a virtual machine. You just want to get one with a click, which is something that you can do on Cloud AI Platform, which was great.

Yeah, and then actually, before, we were using a tool from a competitor. But we've seen that on your Cloud AI Platform you now actually have a tool for data labeling, so we have started testing that out-- it's in beta right now. So we have our data set being labeled on there right now.

JON: I'm still curious about those cultural differences and how you consider what's toxic and what's not.

MARIA: So I think there are several things. So on one side, when it comes to Perspective API, it actually has models in different languages. So right now, it supports Spanish, French, and English. And then there are three models that are in beta, which is, I think, Italian, Portuguese, and another one. And we have been talking with them a lot about Russian because we have a lot of Russian users. And that's one thing that we need to factor in.

Of course, right now we are doing some approximation, because one of the things we have to do to score a message, for example, is to first send it to the Translate API-- that's another Google tool that we are using-- get a translation back, and then score the translation. One thing that we did to try to minimize the errors from cultural differences was, when we had the data set labeled, we had it labeled by people that were from that culture. So we were splitting the data so that we were basically asking Russian people to review Russian messages, Italian people to review Italian messages, and so on, to actually get an understanding of the differences.
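
A minimal sketch of that translate-then-score step with the Cloud Translation Python client; the chat message here is invented, and error handling is omitted.

    from google.cloud import translate_v2 as translate

    translate_client = translate.Client()

    # Invented non-English chat message; translate it to English before scoring it.
    result = translate_client.translate("что за бросок, ты серьёзно?", target_language="en")

    print(result["detectedSourceLanguage"])  # e.g. "ru"
    print(result["translatedText"])          # English text that can then be scored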

Right now, what we're really trying to do, which I think kind of minimizes the problem in our use case, is really go after behavior that is really, really toxic. So when we released this first model into our production system, and it started automatically warning and banning people-- to give you an idea, we are basically targeting the top 5% of most toxic behavior. So within that range, yes, cultural differences are still there, but not as much.

Because on the opposite side, I do think that a slight bit of toxicity is part of competitiveness. I don't know-- it's like football players, who will happily insult each other on the field. It just feels like it can get to a point where you're basically being Big Brother, watching everything and censoring everything. You still need to keep a balance, and you also need to understand you can't go after everything, because then you'd kind of put yourself at the opposite extreme, I would say. And I don't--

JON: Yeah, you used the word banter earlier. [INAUDIBLE] you've got this going on.

MARIA: Yeah, so I think there's some banter going on. And I think you don't want to get into a position where your model is actually banning for banter. So you still want to find a middle ground there. Then, yeah, there were things that were a bit tricky. For example, with using Perspective, or in general-- the gaming language has a lot of slang, so one of the tricky bits was actually to factor that slang into our model. So there were some efforts that we put in on that side.

And then on the other side, there are things that are not acceptable in other contexts but are actually standard terms in a game like Counter-Strike: Global Offensive. For example, if I say "nice headshot," in Counter-Strike it's actually meant to be a compliment. But outside of it, you might be inciting violence. So yeah.

JON: Context differences.

MARIA: Yeah.

JON: So toxicity is a huge challenge kind of everywhere online, right? And I think many of the folks who are listening are building and running systems where users interact with each other. Do you have recommendations for them on how to apply some of the things you've learned in this space?

MARIA: Yeah. Well, I think it's two things. On one side, before using this approach, we were trying to go down the route that I think is used by many, which is more like building a karma system-- one that tries to get an index representing how toxic a player you are. I think in general, a karma system works well when you build it as an honor score, more about how good a player you are rather than how bad a player you are. And in general, not just in gaming but everywhere, I think one side of it is trying to reinforce the positive.

Then on the other side, I also think: know your context. So know what you're dealing with, because toxicity can be so many different things. Even on, let's say, a social network, I can be toxic by writing a very, very bad comment, or I can be toxic by posting a bad image, or I could post a video that is out of [INAUDIBLE].

And those are all different data signals. When I say know your context, it's also know how toxicity can express itself. And consider all these different sources and the fact that you kind of need to deal with all of them, because otherwise-- people are smart. So if they find out you cannot write a swear word in your comment, they're probably going to put that swear word in an image if they notice that--

JON: Yeah, they'll figure something else out.

MARIA: --you're not detecting the image. So they're going to think of something else. So you need to know them, and you need to try to use as comprehensive an approach as you can. And then I think the other side is also: celebrate the small wins. It's a huge issue, and I think it needs to be broken down into smaller ones.

And then you need to tackle each smaller one at a time. Because otherwise, if you're trying to just tackle everything and then go out with the super big system that tries to solve it all, it might not get the result. Instead, think step by step. Just release the first small thing and see what effect it has.

Because one thing is we are people. So we learn from things. So that's also one of the things you want to rely on, is that when you start pushing for change, then hopefully people are going to follow as well. So they are actually going to help you in what you're doing.

JON: Yeah. So you basically said: think about behavior instead of kind of an overall reputation thing. So what people are doing in the moment-- that helps with these short-term things as well.

MARIA: Yeah, because I also think that it helps you in many ways. It helps with taking criticism better. So I think that's always something I have been thinking of, and that's why that was one of the things that I really pushed and FACEIT embraced. And I think, yeah, we're basically trying to build this. And then hopefully, maybe in the future this can become a tool that other game publishers can use.

But it's also really about swapping from criticizing you as a person, which is usually not taken that well-- and also, who am I to criticize you as a person? Maybe you are just having a bad day. So I just want to point out that you did something wrong. And I think that's taken a lot better.

Also, from a personal point of view, in general-- at school or such-- it's very different when you tell a kid they did something wrong versus telling them, well, you're being a bad kid. I think--

JON: It's really interesting to hear about kind of aligning the software systems to what's actually just basically good interpersonal human behavior. It's kind of a classic recommendation to talk about the behavior rather than the person.

MARIA: Yeah, exactly. I think what we're trying to do is-- actually, I think artificial intelligence and machine learning are as smart as people are, or they can be as smart as people are. It's not like artificial intelligence is this magic thing. It depends on how you build it.

So what we are really trying to build with Minerva is to make her think the way we think, in a way. So it's more like, if there are practices that are good in human relations, why should we forget about them when we are in a technological field and we are trying to build something?

JON: Love it. Love it, love it, love it. So if people are motivated and want to learn more about this space, do you have recommendations for where to go next?

BRIAN: Besides becoming a super competitive gamer.

[LAUGHTER]

MARIA: Yes, play a lot. So for sure. Well, there is the case study that was published today on the Jigsaw blog post. So for sure.

JON: OK, we'll put a link to that in the notes.

MARIA: Yeah, Perspective API-- there's lots of good documentation there. Also, if you read about toxicity in gaming, there are lots of articles coming out about it. Actually, there are lots of studies from researchers in academia about it in general-- how to apply machine learning and AI to toxicity, but as a general model. And then you can always take those learnings and see how they fit in your own space.

JON: Could we ask you for one to three recommendations we can put in the show notes for specific papers to look at?

MARIA: Yeah, sure. I can send those over.

JON: Is there anything that we've missed that you would like to mention before we wrap things up? Or would you like to let listeners know where we can find you, or where we can find any material that you're posting, so that we can stay up to date with everything that FACEIT is doing?

MARIA: Yeah, sure. Well, I'm going to be at Google Next in London. So if anybody's there, happy to have a chat with anybody. I'm probably going to be at GDC as well. So FACEIT, you can find us on Medium, so blog.faceit.com. That's a good way to keep up to speed with what we are doing. I'm quite talkative, so there are some blogs about what we're doing around toxicity, what we want to do, and so on. And then faceit.com-- if you're a gamer and actually fancy a match, we'd be happy to see you there.

JON: And I know that you're a gamer. I've never publicly shared my gamer handle or what games I play. But if our listeners would like to hop on the sticks and play with you, I don't know if you want to share your gamer handles or anything.

MARIA: Yeah. So FACEIT [? Lulu-- ?] you can find me on "League of Legends" and "World of Warcraft." Those are my two main games, but usually you can find me around plenty of games.

JON: Awesome. And I guess I'll follow suit. You can find me mostly on "Overwatch" and [INAUDIBLE]. So I'll definitely link my battle tags on that as well.

BRIAN: Oh, new information. Awesome.

MARIA: Yeah, that's really awesome. See? New things that we learned about Jon as well.

JON: Only find out here. Well, thank you so much for spending time with us and sharing. It really feels like you're pushing things forward to make things better for humans interacting online. So thank you so much.

MARIA: Oh, thank you so much for having me. It was a pleasure.

JON: So we would like to thank our guest, Maria, for joining us on this episode. It was really amazing to hear about all the cool things FACEIT is doing to pretty much end toxicity in gaming, or at least put a dent in it. I play competitive games quite often, and toxicity is one of the things I've contributed to myself-- I'll be honest. But it's really great to see someone who's really trying to make a change in the gaming space.

BRIAN: I totally agree. And I'm personally really looking forward to seeing-- gaming is a particularly tough space for this. I'm looking forward to seeing the tools and techniques that come out of there hopefully branch into other areas, because we have challenging environments in pretty much all of our online social networks. So looking forward to seeing improvements there.

JON: And hopefully-- you know, games are meant to bring people together. Maybe FACEIT's platform will help bring people closer together by ending toxicity. So, Brian, our question this week is going to be directed straight at you.

[MUSIC PLAYING]

That question of the week was, when I SSH into my VM via different methods, I get a different username. So what can I do to make that static?

BRIAN: That can be super confusing, because by default-- and this is talking about SSHing into Compute Engine instances-- the username is kind of just passed through from whatever your host environment is. So if you're in the Cloud Console, that's going to be the first part of the email address of whatever you're logged into Google Cloud as.

Then if you later go and SSH in from your laptop, you might have a different username on your laptop. And then you're logging into the same machine, but you end up with two different user accounts on that end, and your dot files don't work the same way, or the permissions don't work the same way, and it gets all confusing. So there are a couple of options.

One is that you could change all of those things by hand-- that's not super useful. The other is when you SSH in from your laptop, and this is a feature that might not be that well known, so just to talk about it a bit: as part of gcloud, there's "gcloud compute ssh" and then an instance name. And since you're already logged in with OAuth to Google Cloud, it handles the SSH key pair exchange and all that sort of stuff for you, and just does all the configuration so you don't have to worry about it.

But one of the things it does in that automatic setup is figure out what user to use, and it just infers that from your laptop. So the fix is actually really simple, but hard to notice-- it's great if you know it's there, not if you don't. Basically, instead of just the instance name, you add user@instance-name, and that will set the remote user to log in as, and you're good to go. So "gcloud compute ssh user@instance," and that'll work.

So that's the short version. There is a more managed version of SSH keys that works at the organization level-- that's called OS Login. And if you're using that, when you submit your SSH key, your public key, the first time, you can also specify the username that should be used on the instances there. And, of course, you're usually logging into a Linux box in this case, so you can use Unix commands, and you can always just sudo su into the user you want on the other end as well.
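
To make that concrete, the two forms Brian described look roughly like this; the instance name, zone, and username are placeholders.

    # Username inferred from your local environment:
    gcloud compute ssh my-instance --zone=us-central1-a

    # Explicit username, so every method lands on the same account:
    gcloud compute ssh alice@my-instance --zone=us-central1-a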

JON: Awesome. Well, that's a handful. I'm glad that you can tackle all of those different methods.

BRIAN: So that was kind of a lot of words. We'll put a couple links in the show notes to the docs with some examples.

JON: So thanks, Brian, for tackling all of those different methods of actually SSHing into your actual GCE instance. Glad that you can offer our listeners multiple, multiple, multiple methods, and the nuances to using each one. So, Brian, I'm curious. Where are you going to be? What are you doing in the upcoming months?

BRIAN: Yeah, so I've recently switched to being all in on Compute Engine and use cases around that. So as part of that, I'm kind of doing a bunch of learning and heads down time. So that's where the majority of my time is. But I'm also going to be at a conference called Supercomputing that's in Denver in November. Interesting conference-- it's a mix of academia and kind of industry stuff. So I'm excited about that.

JON: And I will be at a series of different places. I'll be at Anime NYC-- that's just a personal thing for me in November, right before going to KubeCon in November, where we're running our matchmaking and game server workshop. So if you're going to be there, please check us out. And then I get to hang out with our [? eng ?] team for our Open Match [INAUDIBLE] at Google Kirkland in December. And then I'll probably end up taking a small, very, very small business trip to Montreal to hang out and meet some gaming folks. And then I go on vacation, finally. So yeah, really excited to wrap up my end-of-year travel and have some fun.

BRIAN: Sounds good.

JON: Well, we would like to thank you all for listening. And, Brian, it's been a pleasure as always.

BRIAN: Likewise.

JON: So we'll see you all next week.

[MUSIC PLAYING]

Hosts

Brian Dorsey and Jon Foust

Continue the conversation

Leave us a comment on Reddit