Episode 62 - Tai-Danae Bradley

Evelyn Lamb: Welcome to My Favorite Theorem, a math podcast. I'm Evelyn Lamb, one of your hosts. And here's your other host.
Kevin Knudson: Hi. I’m Kevin Knudson, professor of mathematics at the University of Florida. It's been a while. I haven't seen your smiling face in a while.
EL: Yeah. I've started experimenting more with home haircuts. I don't know if you can see.
KK: I can. It's a little longer on top.
EL: Yeah.
KK: And it's more of a high and tight thing going on here. Yeah. All right. It looks good.
EL: Yeah, it's been kind of fun. And, like, depending on how long it's been since washing it, it has different properties. So it's like materials science over here, too. So a lot of fun.
KK: Well, you probably can't tell, but I've gone from a goatee to a plague beard. And also, I've let my hair grow a good bit longer. I mean, now that I'm in my 50s, there's less of it than there used to be. But I am letting it grow longer, you know, because it's winter, right?
EL: Oh yeah. Your Florida winter. It's probably like, what? 73 degrees there?
KK: It is 66 today. It's chilly.
EL: Oh, wow. Yeah, gosh! Well, today we are very happy to invite Tai-Danae Bradley to the podcast. Hi, Tai-Danae. Will you tell us a little bit about yourself?
Tai-Danae Bradley: Yeah. Hi, Evelyn. Hi, Kevin. Thank you so much for having me here. So I am currently a postdoc at X. People may be more familiar with its former name, Google X. Prior to that, I finished my PhD at the CUNY Graduate Center earlier this year. And I also enjoy writing about math on a website called math3ma.
EL: Yes, and the E of that is a 3 if you're trying to spell it.
TDB: Yeah, m-a-t-h-3-m-a. That's right. I pronounce it mathema. Some people say math-three-ma, but you know.
EL: Yeah, I kind of like saying math-three-ma in my head. So, I guess, not to sound rude. But what does X want with a category theorist?
TDB: Oh, that's a great question. So yeah, first, I might say for all of the real category theorists listening, I may humbly not refer to myself as a category theorist. I'm more of, like, an avid fan of category theory.
KK: But you wrote a book!
TDB: Yeah, I did. I did. No, I really enjoy category theory, I guess I'll say. So at X, I work on a team of folks who are using ideas—now this may sound left field—from physics to tackle problems in machine learning. And when I was in graduate school at CUNY, my research was using ideas in mathematics, including category theory, to sort of tackle similar problems. And so you can see how those could kind of go hand in hand. And so now that I'm at X, I'm really just kind of continuing the same research interest I had, but, you know, in this new environment.
EL: Okay, cool.
KK: Very cool.
EL: Yeah, mostly, we've had academics on the podcast. We’ve had a few people who work in other industries, but it's nice to see what's out there, like, even a very abstract field can get you an applied job somewhere.
TDB: Yeah, that's right.
EL: Yeah, well, of course, we did invite you here to talk about your job. But we also invited you here to ask what your favorite theorem is.
TDB: Okay. Thank you for this question. I'm so excited to talk about this. But I will say, I tend to be very enthusiastic about lots of ideas in mathematics at lots of different times. And so my favorite theorem or result usually depends on the hour of the day. Like, whatever I’m reading at the time, I’m like, this is so awesome! But today, I thought it'd be really fun to talk about the singular value decomposition in linear algebra.
KK: Awesome!
TDB: Yeah. So I will say, when I was an undergrad, I did not learn about SVD. So I think my undergrad class stopped just before that. And so I had to wait to learn about all of its wonders. So for people who are listening, maybe I could just say it's a fundamental result that says the following, simply put. Any matrix whatsoever can be written as a product of three matrices. And these three matrices have nice properties. Two of them, the ones on the left and the right, are unitary matrices, or orthogonal if your matrix is real. And then the middle matrix is a diagonal matrix. And the terminology is if you look at the columns of the two unitary matrices, these are called the singular vectors of your original matrix. And then the entries of the diagonal matrix, those are called the singular values of that matrix. So unlike something like an eigen decomposition, you don't have to make any assumptions about the matrix you started with. It doesn't have to have some special properties for this to work. It's just a blanket statement. Any matrix can be factored in this way.
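To see that blanket statement concretely, here is a minimal NumPy sketch (with a made-up matrix, not anything from the episode): an arbitrary rectangular matrix factors into an orthogonal piece, a diagonal piece, and another orthogonal piece.

```python
# A minimal sketch of the theorem: any matrix, square or not, factors as
# U @ diag(s) @ Vt, where U and Vt have orthonormal columns/rows (unitary
# in the complex case) and s holds the nonnegative singular values.
import numpy as np

A = np.array([[3.0, 1.0, 2.0],
              [0.0, 2.0, 1.0]])  # a made-up non-square matrix, no special structure

U, s, Vt = np.linalg.svd(A, full_matrices=False)

print(np.allclose(U @ np.diag(s) @ Vt, A))   # True: the product reconstructs A
print(np.allclose(U.T @ U, np.eye(2)))       # True: columns of U are orthonormal
print(np.allclose(Vt @ Vt.T, np.eye(2)))     # True: rows of Vt are orthonormal
```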
EL: Yeah, as we were saying before we started recording, I also did not actually encounter this in any classes.
KK: Nor did I.
EL: And yeah, it’s something I've heard of, but never really looked into, because I didn't ever do linear algebra, you know, as part of my thesis or something like that. But yeah, okay, so it seems a little surprising that there aren't any extra restrictions on what kind of matrices can do this. So why is that? I don't know if that question is too far out of left field.
TDB: Maybe that's one of the, you know, many amazing things about SVD: you don't have to make any assumptions. So number one, in mathematics, we usually say multiplying things is pretty easy, but factoring is hard. Like, it's hard to factor something. But here in linear algebra, it's like, oh, things are really nice. You just have this matrix, and you get a factorization. That's pretty amazing. I think, to connect this with maybe something that's more familiar, we could ask: what are those singular vectors? Where do they come from? Or, you know, what's the proof sketch of this?
EL: Yeah.
TDB: And essentially, what you do is you take your matrix, you multiply it by its transpose. And that thing is going to be this nice real symmetric matrix, and that has eigenvectors. And so the eigenvectors of that matrix are actually the singular vectors of your original one. Now, depending on whether you multiply by the transpose of the matrix on the left or on the right, that will determine whether, you know, you get the left or right singular vectors. So, you might think that SVD is, like, second best: “Oh, not every matrix is square, so we can't talk about eigenvectors. Oh, I guess singular vectors will have to do.” But actually, it's like picking up on this nice spectral decomposition theorem that we like. And I think when one looks out into the mathematical/scientific/engineering landscape, you see SVD sort of popping up all over the place; it's pretty ubiquitous. And that sort of suggests it’s not a second-class citizen. It's really a first-class result.
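That proof sketch is easy to check numerically. A minimal sketch, again with a made-up random matrix: the top eigenvector of the matrix times its transpose matches the top singular vector, up to the sign ambiguity that eigenvectors always carry.

```python
# Sketch of the proof idea: eigenvectors of A.T @ A are the right singular
# vectors of A (and eigenvectors of A @ A.T would give the left ones).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))  # a made-up rectangular matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# A.T @ A is symmetric, so np.linalg.eigh applies; it sorts eigenvalues ascending.
w, Q = np.linalg.eigh(A.T @ A)
top_eigvec = Q[:, -1]       # eigenvector for the largest eigenvalue
top_right_singvec = Vt[0]   # right singular vector for the largest singular value

# The two vectors agree up to an overall sign, so compare absolute values.
print(np.allclose(np.abs(top_eigvec), np.abs(top_right_singvec)))  # True
```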
EL: Yeah. Well, that's funny, because when I was reading about it, I was like, “Oh, I guess this is a nice consolation prize for not being an invertible square matrix, is that you can do this thing.” But you're telling me that’s not a good attitude to have about this?
TDB: Well, yeah, I wouldn't think of SVD as a consolation prize; I think it is something really fundamental. You know, if you were to invite linear algebra onto this podcast and ask linear algebra what its favorite theorem is, just based on the ubiquity and prevalence of SVD in nature, I'd probably bet linear algebra would say the singular value decomposition.
EL: Yeah, can we get them next?
KK: Can we get linear algebra on? We’ll see. Okay, so I don't know if this question has—it must have an answer. So say your matrix is square in the first place. So you could talk about the eigenvalues, and if you do this, I assume the singular values are different from the eigenvalues. So what would be the advantage of choosing the singular values over the eigenvalues, for example?
TDB: So I think if your matrix is square, and symmetric, or Hermitian, then the eigenvectors correspond to the singular vectors.
KK: Okay, that makes sense.
TDB: But, that's a good question, Kevin. And I don't have a good answer that I could confidently go on record with.
KK: That’s cool. Sorry. I threw a curveball.
TDB: That’s a great question.
KK: Because the singular values are important. The way I've always sort of heard it was that they sort of act like eigenvalues in the sense that you can line them up and the biggest one matters the most.
TDB: Exactly, exactly. Right. And in fact, I mean, that sort of goes back to the proof that we were talking about. I was saying, oh, the singular vectors are the eigenvectors of this matrix multiplied by its transpose. And the singular values turn out to be the square roots of the eigenvalues of that square matrix that you got. So they're definitely related.
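That relationship between the two spectra is also a one-liner to check; a minimal sketch, with the usual caveat that np.linalg.eigh returns eigenvalues in ascending order while the SVD returns singular values in descending order.

```python
# Check that the singular values of A are the square roots of the
# eigenvalues of A.T @ A, once the two orderings are lined up.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))  # a made-up rectangular matrix

s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
w = np.linalg.eigh(A.T @ A)[0]          # eigenvalues of A.T @ A, ascending

print(np.allclose(np.sqrt(w[::-1]), s))  # True
```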
KK: Okay. All right. Very cool. So what drew you to this theorem? Why this theorem in particular?
TDB: Yeah, why this theorem? So this kind of goes back to what we were talking about earlier. I really like this theorem because it's very parallel to a construction in category theory.
KK: Yes.
TDB: Maybe people find that very surprising. We're talking about SVD. And all of a sudden, here's this category theory curveball.
EL: Yeah, because linear algebra really does feel like some of the most tangible math, and category theory, to me, feels like some of the least tangible.
KK: So wait, wait, are you going to tell us this is the Yoneda lemma for linear algebra?
TDB: No. Although that was going to be my other favorite theorem. Okay, so I'm excited to share this with you. I think this is a really nice story. So I'm going to try my best because it can get heavy, but I'm going to try to keep it really light. I might omit details, but, you know, people can maybe look further into this.
So to make the connection, and to keep things relatively understandable, let's forget for a second that I even mentioned category theory. So let’s empty our brains of linear algebra and category theory. I just want to think about sets for a second. So let me just give a really simple, simple construction. Suppose we have two sets. Let's say they're finite, for simplicity. And I'll call them a set X and a set Y. And suppose I have a relation between these two sets, so a subset of the Cartesian product. And just for simplicity, or fun, let’s think of the elements of the set X as objects. So maybe animals: cat, dog, fish, turtle, blah, blah. And let's also think of elements in the set Y as features or attributes, like, “has four legs,” “is furry,” “eats bugs,” blah, blah, blah. Okay. Now, given any relation—any subset of a Cartesian product of sets—you can always ask the following simple question. Suppose I have a subset of objects. You can ask, “Hey, what are all the features that are common to all of those objects in my subset?” So you can imagine that each object in your subset corresponds to a set of features, only the ones possessed by that object. And now just take the intersection over all objects in your subset. That's a totally natural question you could ask. And you can also imagine going in the other direction, and asking the same question. Suppose you have a subset of features. And you want to know, “Hey, what are all of the objects that share all of those features in that subset I started with?” A totally natural question you could ask anytime you have a relation.
Now, this leads to a really interesting construction. Namely, if someone were to give me any subset of objects and any subset of features, you could ask, “Does this pair satisfy the property that these two sets are the answers to those two questions that I asked?” Like, I had my set of objects, and, oh, is this set of features that you gave me only the ones corresponding to this set of objects, and vice versa? Pairs of subsets for which the answer is yes, that satisfy that property, have a special name. They're called formal concepts. So you can imagine, like, oh, the concept of, you know, “house pet” is like the set {rabbits, cats, dogs}, and, like, the features that they share are “furry,” “sits in your lap,” blah, blah, blah. So this is not a definition I made up; you can go on Wikipedia and look up formal concept analysis. This is part of that. Or you can usually find this in books on lattice theory and order theory. So formal concepts are these nice things you get from a relation between two sets.
Now, what in the world does this have to do with linear algebra or category theory, blah, blah, blah? So here's the connection. Probably you can see it already. Anytime you have a relation, that’s basically a matrix. It's a matrix whose entries are 0 and 1. You can imagine a matrix where the rows are indexed by your objects and the columns are indexed by your features. And there's a 1 in the (x, y) entry if that object has that feature, and a 0 otherwise.
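Here is a small sketch of the whole construction with made-up toy data (the animals and features are invented for illustration): the two questions from above, the formal-concept check, and the relation's 0-1 matrix.

```python
# A toy relation between objects and features, the two "questions" from
# the episode, the formal-concept check, and the 0-1 matrix encoding.
import numpy as np

relation = {
    ("cat",    "has four legs"), ("cat",    "is furry"),
    ("dog",    "has four legs"), ("dog",    "is furry"),
    ("turtle", "has four legs"), ("turtle", "eats bugs"),
    ("fish",   "eats bugs"),
}
objects  = sorted({o for o, _ in relation})
features = sorted({f for _, f in relation})

def common_features(objs):
    """All features shared by every object in objs."""
    return {f for f in features if all((o, f) in relation for o in objs)}

def common_objects(feats):
    """All objects possessing every feature in feats."""
    return {o for o in objects if all((o, f) in relation for f in feats)}

def is_formal_concept(objs, feats):
    """A pair is a formal concept when each set is exactly the answer
    to the question asked of the other."""
    return common_features(objs) == feats and common_objects(feats) == objs

print(is_formal_concept({"cat", "dog"}, {"has four legs", "is furry"}))  # True

# The same relation as a 0-1 matrix: rows are objects, columns are features.
M = np.array([[1 if (o, f) in relation else 0 for f in features] for o in objects])
print(M)
```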
KK: Sure.
TDB: And it turns out that these formal concepts that you get are very much like the eigenvectors of that 0-1 matrix multiplied by its transpose. AKA, they're like the singular vectors of your relation. So I'm saying “it turns out”—so I'm kind of asking you to believe me, and I'm not giving you any reason to see why that should be true—but when you put pen to paper and you work out all of the details, you can sort of see this. But I say it's “like” because if you just do the naive thing, and think of your 0-1 matrix as a linear map, like as a linear transformation, you could say, okay, you know, should I view this as a matrix over the reals? Or maybe I want to think of 0 and 1 as, you know, the finite field with two elements. But if you try to work out the linear algebra and say, oh, formal concepts are eigenvectors, it doesn't work. And you can sort of see why that is: we started the conversation with sets, not vector spaces. So this formal concept story is not a story about linear algebra, i.e., the conversation is not occurring in the world of linear algebra. And so if you have mappings—you know, from sets of objects to sets of features—the kind of structure you want that to preserve is not linearity, because we started with sets. So we weren't talking about linear algebra.
So what is it? It turns out it's a different structure. Maybe for the sake of time, it's not really important what it is, or if you ask me, I'll be happy to tell you. But just knowing there's another kind of structure that you'd like this map to preserve, then under the right sort of context, when you're in the right context, you really do see: oh, wow, these formal concepts are really like eigenvectors or singular vectors in this new context.
Now, anytime you have a recipe, or a template, or a context, where you can just sort of substitute out the ingredients for something else, I mean, it's a good bet that category theory is involved. And indeed, that's the case. So it turns out that this mapping, this sort of dual mapping from objects to features, and then going back from features to objects, that, it turns out, is an example of an adjunction in category theory. So there's a way to view sets as categories. And there's a way to view mappings between them as functors. And an adjunction in category theory is like a linear map and its adjoint, or like a matrix and its transpose. So in category theory, an adjunction is — let me say it this way: in linear algebra, an adjoint is defined by an equation involving an inner product. For the linear adjoint, there's a special equation that your map and its adjoint must satisfy. And in category theory, it's very analogous. It's a functor that satisfies an “equation” that looks a lot like the adjoint equation in linear algebra. And so when you unravel all of this, it's almost like Mad Libs: you have, like, this Mad Lib template. And if you erase, you know, the word “matrix” and substitute in whatever the categorical version of that should be, you get the thing in category theory, but if you stick in “matrix,” oh, you get linear algebra. If you erase, you know, eigenvectors, you get formal concepts, or whatever the categorical version of that is, but if you have eigenvectors, then that's linear algebra. So it's almost like this mirror world between the linear algebra that we all know and love, and, like, Evelyn, you were saying, it's totally concrete. But then if you just swap out some of the words, like you just substitute some of the ingredients in this recipe, then you recover a construction in category theory. And I am not sure if it's well known — I think among the experts in category theory it is — but it's something that I really enjoy thinking about. And so that's why I like SVD.
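For readers who want the parallel spelled out, these are the two standard equations being compared (textbook notation, not a quote from the episode): the defining equation of a linear adjoint, and the natural isomorphism defining a categorical adjunction, with hom-sets playing the role of the inner product.

```latex
% Linear algebra: the adjoint A^{*} of a linear map A is defined by an
% inner-product equation.
\langle A x, \, y \rangle \;=\; \langle x, \, A^{*} y \rangle

% Category theory: an adjunction F \dashv G is a natural isomorphism of
% hom-sets, with Hom playing the role of the inner product.
\mathrm{Hom}(F x, \, y) \;\cong\; \mathrm{Hom}(x, \, G y)
```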
EL: So I think you may have had the unfortunate effect of me now thinking of category theory as the Mad Libs of math. Category theorists are just going and erasing whatever mathematical structure you had and replacing it with some other one.
KK: That’s what a category is supposed to do, right? I mean, it's this big structure that just captures some big idea that is lurking everywhere. That's really the beautiful thing, and the power, of the whole subject.
TDB: Yeah, and I really like this little Mad Lib exercise in particular, because it's kind of fun to think of singular vectors as analogous to concepts, which could sort of maybe explain why SVD is so ubiquitous throughout the scientific landscape. Because you have this matrix, and it’s sort of telling you what goes with what. I have these correlations, maybe I organize them into a matrix; I have data, I organize it into a matrix. And SVD sort of nicely collects the patterns, or correlations, or concepts in the data that's represented by our matrix. And, I think, Kevin, earlier you were saying how singular values sort of convey the importance of things based on how big they are. And those things, I think, are a little bit like the concepts, maybe. That’s sort of reaching far, but I think it's kind of a funny heuristic that I have in mind.
KK: I mean, the company you work for is very famous for exploiting singular values, right?
TDB: Exactly. Exactly.
KK: Yep. So another fun part of this podcast is we ask our guests to pair their favorite theorem with something. So what pairs well with SVD?
TDB: Okay, great question. I thought a lot about this. But I, like, had this idea and then scratched it off, then I had another idea and scratched it off. So here's what I came up with. Before I tell you what I want to pair this with, I should say, for background reasons, this Mad Libs, ingredient-swapping, recipe-type thing is a little bit mysterious to me. Because while the linear algebra is analogous to the category theory, the category theory doesn't really subsume the linear algebra. So usually, when you see the same phenomenon occurring in a bunch of places throughout mathematics, you think, “Oh, there must be some unifying thread. Clearly something is going on. We need some language to tell us why I keep seeing the same construction reappearing.” And usually category theory lends a hand in that. But in this case, it doesn't. There's no—in other words, it's like I have two identical twins, and yet they don’t, I don’t know, come from the same parents or something.
KK: Separated at birth or something?
TDB: Yeah. Something like that. Yeah, exactly. They’re, like, separated at birth, but you're like, “Oh, where are their parents? Where were they initially together?” But I don't know that; that hasn't been worked out yet. So it's a little bit mysterious to me. So here it is: I'm going to pair SVD with, okay. You know those dum-dum lollipops?
KK: Yeah, at the bank.
TDB: Okay. Yeah, exactly. Exactly. Just for listeners, that’s d-u-m, not d-u-m-b. I feel a little bit—anyway. Okay, so the dum-dum lollipops, they have this mystery flavor.
KK: They do.
TDB: Right. I can't remember exactly, but I think it's wrapped up in a white wrapper with question marks all over it.
EL: Yeah.
TDB: And you’re letting it dissolve in your mouth. You're like, well, I don't really know what this is. I think it’s, like, blueberry and watermelon? Or I don't know. Who knows what this is? Okay. So this mystery that I'm struggling to explain is a little bit like my mathematical dum-dum lollipop mystery flavor. So, you know, I like to think of this as a really nice, tasty mathematical treat. But it's shrouded in this wrapper with question marks over it. And I'm not quite sure what's going on, but boy, is it cool and fun to think about!
EL: I like that. Yeah, it's been a while since I went to the bank with my mom, which was my main source of dum-dum lollipops.
TDB: Same, exactly. That's funny, with my mom as well.
EL: Yeah. That's just how children obtain dum-dums.
KK: Can you even buy them anywhere? I mean, that’s the only place that they actually exist.
EL: I mean, wherever. Bank supply stores, you know: get a big safe, you can get those panic buttons for if there's a bank robber, and you can get dum-dum lollipops. This is what they sell.
TDB: That’s right.
KK: No, it must be possible to get them somewhere else, though. When I was a kid trick-or-treating back in the 70s, you know, there would always be that cheap family on the block that would either hand out bubblegum or dum-dums. Or even worse, candy corn.
EL: I must admit I do enjoy candy corn. It's not unlike eating flavored crayons, but I’m into it. Barely flavored. Basically just “sweet” is the flavor.
KK: That’s right.
EL: Yeah, well, so actually, this raises a question. I have not had a dum-dum in a very long time. And so is the mystery flavor always the same? Or do they just wrap up some normal flavor?
KK: Oh, that’s a good question.
EL: Like, it falls off the assembly line and they wrap it in some other thing. I never paid enough attention. I also targeted the root beers, mostly. So I didn't eat a whole lot of mystery ones because root beer is the best dum-dum.
KK: You and me! I was always for the root beer. Absolutely.
EL: And butterscotch. Yeah.
TDB: Oh, yeah. The butterscotch are good. So Evelyn, I was asking that same question to myself just before we started recording. I did a quick Google search. And I think what happens, at least in some cases, like maybe in the past—and also don't quote me on this because I don't work at a dum-dum factory—is that it's like, oh, when we're making the, I don't know, cherry or butterscotch flavored ones, and the next in line are going to be root beer or whatever, we’re not going to clean out all of the, you know, whatever. So people get the transition flavor from one recipe into the other, and we’ll just slap on the “mystery” wrapper. I don't know, someone should figure this out.
KK: Interesting.
EL: I don't want to find out the answer because I love that answer.
KK: I like that answer too.
EL: I don't want the possibility that it's wrong, I just want to believe in that. That is my Santa Claus.
KK: And of course, now I’m thinking of those standard problems in the differential equations course where you're doing those mixing problems, right? So you've got, you know, cherry or whatever, and then you start to infuse it with the next flavor. And so for a while, there's going to be this stretch of, you know, varying amounts of the two, and then finally, it becomes the next flavor.
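For anyone who hasn't seen those mixing problems, the simplest textbook version (stated here from memory, not from the episode) is a one-line differential equation for how much of the old flavor remains in a well-stirred vat being flushed by the new one.

```latex
% Well-stirred mixing: x(t) = amount of the old flavor in a vat of constant
% volume V, flushed at flow rate r by the incoming new flavor.
\frac{dx}{dt} = -\frac{r}{V}\, x(t)
\quad\Longrightarrow\quad
x(t) = x(0)\, e^{-rt/V}
```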
TDB: Exactly.
EL: Well, can you quantify, like, what amount, and which flavor dominates, with some kind of eigenflavor? I'm really reaching here.
TDB: I love that idea.
EL: Yeah. Oh, man. I kind of want to eat dum-dums now. That’s not one of my normal candies that I go to.
TDB: I know, I haven't had them for years, I think.
KK: Yeah, well, we still have the leftover Halloween candy. So this is, we can tell our listeners—What is this today? It's November 19?
EL: 19th, yeah.
KK: Right. So yeah, we bought one bag of candy because we never get very many trick-or-treaters anyway. And this year, we had one small group. And so we bought a bag of mini chocolate bars or whatever. And it's fun. We have a two-story house. We have a balcony on the front of our house. So this group of kids came up and we lowered candy from our balcony down. When I say “we” I mean my wife. I was cooking dinner. But we still have this bag. We're not candy-eaters. But you're right. I'm jonesing for a dum-dum now. I do need to go to the bank. But I feel a little cheap asking for one.
EL: Yeah. I feel like, you know, maybe 15, 16, is where you kind of start aging out of bank dum-dums.
KK: Yep, yeah. Sort of like trick-or-treating.
EL: Well, anyway, getting back to math. Have we allowed you to say what you wanted to say about the singular value decomposition?
TDB: Yeah. I mean, I could talk for hours about SVD and all the things, but I think for the sake of listeners’ brains, I don't want to cause anyone to implode. I think I shared a lot. Category theory can be tough. So, I mean, it appears in lots and lots of places. I originally started thinking about this because it cropped up in my thesis work, my PhD work, which involved a mixture of not only category theory, but linear algebra for, essentially, things in quantum mechanics. And so you actually see these ideas appear in sort of, you know, “real-world” physical scenarios as well. Which is, again, what was kind of drawing me to this mystery. Like, wow, why does it keep appearing in all of these cool places? What's going on? Maybe category theory has something to say about it. So it's just a treat for me to think about.
EL: Yeah. And if our listeners want to find out more about you and follow you online or anything, where can they look?
TDB: Yeah, so they can look in a few places. Primarily, my blog, math3ma.com. I'm also on Twitter, @math3ma, as well as Facebook and Instagram.
EL: And what is your book? Please plug your book.
TDB: Thank you. Thank you so much. Right. So I recently co-authored a book. It’s a graduate-level book on point-set topology from the perspective of category theory. So the title of the book is Topology: A Categorical Approach. And I wrote this with John Terilla, who was my PhD thesis advisor, and Tyler Bryson, who is also a student of John's at CUNY. And we really wrote this for, you know, if you're in a first-semester topology course in your first year of graduate school. So basic topology, but we were kind of thinking, oh, what's a way to introduce category theory that’s sort of gentler than just: “Blah. Here’s a book. Read all about category theory!” We wanted to take something that people were probably already familiar with, like basic point-set. Maybe they learned that in undergrad or maybe from a real analysis course. And we say, “Hey, here are things you already know. Now, we're just going to reframe the thing you already know in sort of a different perspective. And oh, by the way, that perspective is called category theory. Look how great this is.” So giving folks new ways to think about and contemplate things they already know, and sort of welcoming them or inviting them into the world of category theory in that way.
KK: Nice.
EL: Yeah. So definitely check that out if you're interested. The way you said, like, “Blah, category theory”—the other day, for some reason, I was thinking about the Ice Bucket Challenge from, like, I don't know, five or six years ago, where people poured the ice on their head for ALS research. (You’re also supposed to give money, because pouring ice on your head doesn't actually help ALS research.)
TDB: Right.
EL: But yeah, it's like this is an alternative to the Ice Bucket Challenge version of category theory.
TDB: That’s right. That's a great way to put it. Exactly.
EL: Yeah. Well, thank you so much for joining us. It was fun.
KK: This was great fun. Yeah.
On this episode, we had the pleasure of talking with Tai-Danae Bradley, a postdoc at X, about the singular value decomposition. Here are some links you might find relevant:

Bradley's website, math3ma.com
Her Twitter, Facebook, and Instagram accounts

The book she co-wrote, Topology: A Categorical Approach
