Description:
What does AI look like to the taxpayer now that it is here to stay? What does it mean for the entrepreneur? In this episode, W. Russell Neuman joins Tom in exploring the positive side of AI, how AI can co-exist with humans, and the many ways it can propel us forward just as technology was always intended.
Order Tom’s book, “The Win-Win Wealth Strategy: 7 Investments the Government Will Pay You to Make” at: https://winwinwealthstrategy.com/
Looking for more on W. Russell Neuman?
Books: “Evolutionary Intelligence: How Technology Will Make Us Smarter”
“The Digital Difference: Media Technology and the Theory of Communication Effects”
SHOW NOTES:
00:00 – Intro
02:20 – What is AI?
07:40 – How will AI help us with decision making?
14:31 – Can AI and blockchain work together?
20:23 – Regulating AI – should it and can it be done?
25:30 – Using AI within the IRS
Transcript
Speaker 1:
This is the Wealth Ability Show with Tom Wheelwright. Way more money, way less taxes.
Tom Wheelwright:
So probably the biggest topic in the business world today in investing world is AI. And there’s a lot of controversy over AI. Elon Musk is very concerned about controlling AI. Congress is looking at regulating AI, which is challenging since they can’t even regulate cryptocurrency. The question is, is AI going to hurt us or is it going to help us? Is Siri going to kill us or is Siri a solution and a step forward?
And I love the positive side of looking at technology and what it can do for us. And we have the expert, Russ Neuman, who’s the professor of Media Technology, make sure I get this right, Media Technology at NYU. And just so excited to have you on our show today, Russ, because as we were talking earlier, AI actually is going to play a big role in the tax field and a big role in business. So if you could just give us a little of your background, because you’ve spent some time watching this whole thing evolve and would love to get some of your perspectives.
Russell Neuman:
Okay. I have a PhD in the social sciences, in sociology, from UC Berkeley. I spent most of my career at MIT, at the MIT Media Lab, and I've been co-teaching with engineers for the last 20 years. As a result, after stints at Penn, Michigan, and now NYU, I've become the technology guy. My focus is not just on how the technology works, though I think it's important that we understand that, and I'm happy to talk a little bit about how the guts of these AI systems are designed. It's on the social, economic, and cultural impact of these technologies. So that's my specialty.
Tom Wheelwright:
I love that. So let’s start with some of the basics. So if you would tell us what, just real briefly, what is AI and what isn’t AI?
Russell Neuman:
Okay. The term was invented in the mid-1950s in a hopeful mood. And over the last 70 years there have been AI winters, periods when we felt we'd never get it all to work. People's perspective on all this has changed in the last year as a result of GPT-3.5, GPT-4, and ChatGPT. And it turns out that a number of the major players, including Google and Meta, were sitting on very large language models of their own and felt pressed to move forward when ChatGPT caused all that trouble.
Artificial intelligence basically refers to a decision system that's following a set of rules. I have a way of trying to humanize something that's very hard to understand: think about when you're typing on your cell phone or your computer and it finishes the sentence. You type, "I'd like to meet you at eight," and it goes, "o'clock tonight."

It turns out that finishing a sentence is fairly easy, because most of the things we're typing with our thumbs have very predictable sentence structure. So finish the spelling, finish the sentence, that's easy. What happened was they started extending the length of the prediction: if you start a sentence, it will not only finish the sentence but finish the paragraph, and then the entire essay. To do that, they had to make a much, much larger model. When it's just finishing your sentence, maybe it has 10 words it thinks are likely, and it picks the most likely word out of the 10. The number of parameters in these modern models is in the neighborhood of a hundred billion. And that's why, when they put these large systems to work, they can't predict which of the billions of parameters that could possibly be activated by a particular prompt are going to be activated.

So if you think about the human brain, there are roughly 90 billion neurons in it. We've got models that, based on the experience of reading trillions of texts, mostly from the internet, generate predictions we take to be almost human-like in their prescience.
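Russell's "finish the sentence" mechanism can be sketched as a toy next-word predictor. The corpus and lookup table below are invented for illustration; a real model like ChatGPT replaces the table with on the order of a hundred billion learned parameters.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "trillions of texts" a real model reads.
corpus = "i would like to meet you at eight o'clock tonight".split()

# Count which word follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def complete(word, length=4):
    """Repeatedly append the most likely next word, as autocomplete does."""
    out = [word]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(complete("meet"))  # meet you at eight o'clock
```

Extending `length` from one word to a whole paragraph or essay is exactly the jump Russell describes, and it is what forces the model to grow so large.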
Tom Wheelwright:
Interesting. So you're very much a proponent, as I understand it, of AI and of its positive effects on how we make decisions, on how AI might help us make decisions. Can you walk us through that? Because there's so much fear. I know it's your quote I started with, "Is Siri going to kill us?" So let's move to the positive side. I wrote a book called Tax-Free Wealth, which is the positive side of tax. So how do you take the positive side of AI, and how will it affect our decision-making process?
Russell Neuman:
Okay, let me start by trying to be respectful and say we certainly want to be cautious and careful. The senior executives and engineers working in this area have expressed concerns and cautions. Many of my colleagues at MIT signed that document that said maybe we should pause for six months. I think they knew it was unlikely that was actually going to happen, but they made the concern concrete by saying, "Maybe we should wait six months and see if we've got this all together and have dealt with the risks of misalignment between what these systems are doing and what we want them to do."

So I'm respectful of those concerns, but I think they're based on a fundamental misconception, which is to project human values and experience onto these systems. Humans evolved out of competition for scarce resources. That's not how these systems were set up. So the notion that they're going to want to kill all of us in order to take our resources is a classic anthropomorphic projection.

The classic question we're paying attention to is: if they get really smart, how can we protect ourselves when they are "smarter" than we are? The answer is that we put AI systems to work on our own side and have them inspect these complicated systems; only another AI system could realistically monitor a working AI system. So we can put AI systems under our control to protect us.
Tom Wheelwright:
Using AI to monitor other AI, because [inaudible 00:07:24] like you say, it is a computer, it's not a person, and we're actually telling it what to do in the first place. Is that fair? That does make some sense. So when you talk about decision-making, how do you see AI in the decision-making process? I think that's very important. Our audience is primarily entrepreneurs and investors. How do you see AI helping us in that decision-making process?
Russell Neuman:
Okay. Think about what Consumer Reports does. It tries to figure out the quality of different products, and it's got maybe, I don't know, 10 different dimensions of outdoor grill: the price, the shininess, how fast it heats up. What happens is they figure out what they think is most important and say, "Well then, we recommend this particular grill to you." What these systems are getting more and more responsive to is your own particular interests; maybe getting a shiny grill that starts up real fast isn't your concern. So it weights the different values. Here's where AI systems come in, when they're practically accessible. Part of my book tries to address this question of how we will communicate, and I start out with the notion of a little Siri character sitting on your shoulder, watching the world as you watch it and whispering into your ear.

The issue there is the computers. You said it's just a computer, but these computers are getting closer and closer. They used to be big rooms, then they were on our desktops, then laptops, then in our palms, and pretty soon they're going to be in our glasses and in wearables, and ultimately, I think, in contact lenses, communicating with a big system. It's not a box anymore, it's a network. And we'll be able to access the network through all kinds of audio, visual, tactile, and ultimately direct-to-brain connections.
So what it’s going to do is it’s going to say, “All right, you got to make a decision here. Here are five considerations. You tell me which ones are most important and let’s walk through that.” So it helps to clarify our values, the outcomes we desire the most. And then instead of telling us what to do, it helps us evaluate the options.
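The grill example can be sketched as a simple weighted score. The dimensions, ratings, and weights below are made up for illustration; the point is that the buyer's own weights, not a one-size-fits-all ranking, pick the winner.

```python
# Hypothetical ratings on a 0-1 scale for each dimension.
grills = {
    "GrillA": {"price": 0.9, "heat_up_speed": 0.4, "durability": 0.8},
    "GrillB": {"price": 0.5, "heat_up_speed": 0.9, "durability": 0.6},
}

# This buyer cares most about durability; a fast start-up isn't their concern.
weights = {"price": 0.3, "heat_up_speed": 0.1, "durability": 0.6}

def score(ratings):
    """Weighted sum: each dimension counts as much as this buyer says it does."""
    return sum(weights[d] * ratings[d] for d in weights)

best = max(grills, key=lambda name: score(grills[name]))
print(best)  # GrillA
```

Change the weights to favor heat-up speed and the recommendation flips, which is the "clarify your values first" step Russell describes.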
Tom Wheelwright:
So are you suggesting it’ll help us ask better questions?
Russell Neuman:
You put it very well.
Tom Wheelwright:
Because I have long believed that the number one role of an advisor is to ask better questions. For example, in my business, the question I get asked all the time is, is something deductible? Is this mug deductible? And I say, "Well, a better question would be, how do I make this mug deductible?" So it sounds to me like what AI can do is search the universe of questions, basically, and come up with, "Well, here are some better questions to ask." Is that a fair assessment?
Russell Neuman:
It is indeed.
Tom Wheelwright:
Okay. So let's take this into the financial world, free markets, et cetera, but let's stick with real, day-to-day stuff. Say we're trying to make a decision on an investment in a multifamily housing syndication: a developer is buying a 200-unit apartment complex, just so we can get a little concrete here, and we're trying to do our due diligence on it. How do you think AI might help us with our due diligence?
Russell Neuman:
I'm going to go back to that question of "Well, it's only a computer" and say, think of it not as a computer but as part of a very elaborate system. The classic measure of assessing value is comparables. Typically we can hold two or three comparables in our head. An AI system can hold a thousand comparables in its head, weight the differences, and try different senses of which dimensions of comparability would be best. Instead of giving you a recommendation, it could say, here are five or 10 different scenarios of how the value of this could play out, and whether, say, the chances of a second wave of COVID would influence the investment. It would give a very nuanced answer and describe the conditionalities that the investor might want to consider in making the investment.
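The comparables idea can be sketched as a similarity-weighted average. The comps, prices, and the one-dimensional similarity rule (closeness in unit count) are invented for illustration; a real system would weight many dimensions at once across a thousand comps.

```python
# Hypothetical comparable sales: (units, sale_price).
comps = [
    (180, 27_000_000),
    (220, 33_000_000),
    (150, 21_000_000),
]
target_units = 200  # the 200-unit complex being evaluated

def similarity(units):
    # Closer in size -> higher weight. One dimension only, for illustration.
    return 1.0 / (1 + abs(units - target_units))

total = sum(similarity(u) for u, _ in comps)
estimate = sum(similarity(u) * price for u, price in comps) / total
print(f"${estimate:,.0f}")
```

The two comps closest in size dominate the estimate; swapping in a different similarity rule is the "try different senses of comparability" step.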
Tom Wheelwright:
Interesting. Now let's expand to a little more global viewpoint. Right now we've got the Federal Reserve trying to combat inflation with interest rates, which I think they've got wrong. But let's say you're the Federal Reserve chairman, or you're the Federal Reserve. How would you use AI to predict, because it's got some predictive abilities, right? It's looking at history, it's looking at consequences, it's looking at what's happened in the past. How would you go about using AI to predict what might happen with another interest rate rise, for example?
Russell Neuman:
Well, now we're in the domain of macroeconomics, and these fellows have been doing modeling and estimating for years. They've got, I don't know if you want to call it AI or not, very complicated models, and they can work the different conditionals to see how they affect the model. So my guess is that community is already working with very complicated multivariate models to make those predictions, and in that case, I think it would be nothing new. The real change is that something as complicated as the work of a staff of 20 quants at a particular investment firm or at the Fed becomes something the average individual and the average investor can put to work.
Tom Wheelwright:
Hey, if you like financial education the way I do, you're going to love Buck Joffrey's podcast. Buck's a friend of mine, he's a client of mine, he's a former board-certified surgeon, and he's turned into a real estate professional. He has a podcast that's geared towards high-paid professionals. So if you're a high-paid professional and you're going, "Look, I'd like to do something different with my money than what I'm doing. I'd like to get financially educated, I'd like to take control of my money and my life and my taxes," I would love to recommend Buck Joffrey's podcast, which is called Wealth Formula Podcast with Buck Joffrey. I hope you join Buck on this adventure of a lifetime.
Russell Neuman:
Oh, I like that idea. I like that idea. I'd love to take Wall Street out of it. So speaking of that, I did an interview not too long ago with a fellow by the name of Alex Tapscott, and he was talking about Web 3.0 and blockchain. He's always talking about blockchain.
Tom Wheelwright:
He is and I-
Russell Neuman:
Tapscott’s middle name is blockchain.
Tom Wheelwright:
… I think so-
Russell Neuman:
Fun to hear him talk.
Tom Wheelwright:
I got to say I love blockchain, because when you really break it down, it's triple-entry accounting. So an accounting system is what blockchain really ends up being. He's talking about Web 3.0 and how blockchain will be the internet of ownership. So how do you see AI and blockchain? This is a question I've had for the last couple of years. How will AI and blockchain work together?
Russell Neuman:
I'm going to give an answer that you rarely get in this kind of context, which is, I don't know. Everybody else seems to have an opinion on these things. My perception is that the advantages and weaknesses of blockchain data storage and verification technologies are quite independent of the kind of decision-processing strengths that AI has. If anything, I think the two are complementary. I don't think there's any technical or architectural requirement that a blockchain model would be needed for an AI system to be very successful, or vice versa.
Tom Wheelwright:
One of the concerns, and this is actually one of the things Alex mentioned as a concern, is that if you start with an incorrect assumption in blockchain, it just perpetuates itself forever, because you have to be right in the first place. And now it's distributed; it's not at a central location, it's out there auditing itself. Here's what I'm wondering: one of the things I've noticed with these AI note-takers is that I get the notes back and I'm going, "That's not what I said." I'm sorry, I don't like them, because they actually misstate what I said in the conversation. So to me they create an inaccuracy, and then we assume it's correct.

It's like when the internet first came out and we said, "Well, if it's on the web, it must be true." Now it's, "Well, if it's AI, it must be right. Look at all this computing capability and what it can do." So how do you look at this? Because to me a lot of it's an accuracy issue. Will it be accurate, or could it just be taking a lot of inaccurate things and combining them into something else that's inaccurate?
Russell Neuman:
Okay, I'm going to respond to two elements of your query. The first is the notion that a blockchain will multiply inaccuracies. My sense is that what blockchains are good at is correcting inaccuracies. If I have a certain value of cryptocurrency and that's been documented in the blockchain, and somebody else wants to steal my investment and writes in one element of the blockchain that it belongs to him instead of me, then all the others vote, and it says, "Well, that's only one vote, and there are 30, 50 other records out there," and that corrects it. So the blockchain is a good model for correcting by-
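The voting Russell describes can be sketched with replicated copies of one record, where a single tampered copy is simply outvoted. The record text and node count are invented for illustration, and real blockchains add cryptographic checks on top of this majority rule.

```python
from collections import Counter

# 30 nodes each hold a copy of the same ownership record.
ledger_copies = ["alice owns coin #7"] * 30
ledger_copies[4] = "mallory owns coin #7"  # one node rewrites its copy

# The majority version wins; the lone altered record is corrected.
consensus, votes = Counter(ledger_copies).most_common(1)[0]
print(consensus, votes)  # alice owns coin #7 29
```

Note this only corrects records that were right on most copies to begin with, which is exactly the gap Tom raises next: a record that was wrong everywhere from the start wins the vote too.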
Tom Wheelwright:
But let's say that you didn't own it in the first place, yet in the blockchain you owned it. That's where you've got a problem. Here's the example I used when I asked him, because to me one of the most obvious uses for blockchain is title, actually recording title on property. Let's get rid of these title companies, who get to charge over and over for really doing nothing but what they've done once, and they get to charge for it a hundred times.

Every time you refinance your house, you get a title insurance charge, which makes no sense to me. And his comment, if I'm understanding it right, was, "Well, yes, but you do have to make sure it was right in the first place, because once you put it out on the blockchain, it's there and it's not coming back." So I think that's the kind of accuracy issue we're talking about. And with AI, where AI is really looking at the entire universe, if you will, or the universe of what it has access to, will it find those inaccuracies? Will AI actually be a tool to find the inaccuracies?
Russell Neuman:
Well, you can predict, given my generally positive and hopeful view of AI, that I think it will get a lot better. So Tom, when you get concerned about AI making mistakes: first, notice that we've invented a cute word for them, calling them hallucinations. It makes it a little less threatening when you say, "Well, it has little hallucinations." And the second thing is to say, this is a toddler AI. We're talking about the Model T here.
Tom Wheelwright:
For sure.
Russell Neuman:
You run around front and crank the engine, then come back and set the spark so it works, and you put goggles on, because we don't have windshields that are very good yet. So this is the Model T of AI, and the capacity for self-correction is very strong. I would be surprised if the first editions of AI didn't hallucinate a little bit, but I think you'll see increasing levels of accuracy.

Take Wikipedia, for example. Anybody can write pretty much anything they want in Wikipedia, but they've got a self-monitoring system that works. And studies have found that the accuracy of Wikipedia is about equivalent to, and often better than, a traditional encyclopedia like Britannica, with all the authorities writing the articles.
Tom Wheelwright:
That’s interesting. So, I’m going to ask you the question that we started with. Should AI be regulated or can it be regulated?
Russell Neuman:
No, and no. I understand the impulse: faced with something that looks very powerful and looks like it could be an issue to be dealt with, and so that they don't look like they're behind the times, the members of Congress announce that they're monitoring these things closely, examining what's going on, and interviewing all the senior engineers and executives and NGO folks who are concerned about possible bad effects. My first argument is that AI is basically math, applied math, and you can't regulate math. My second is that most of what AI does is speak, and prior restraint of speech is not, I think, a promising way to move. And third, there are the concerns that motivate interest in creating a federal artificial intelligence commission. By the way, if you sound out that acronym, it comes out pronounced "fake."
Tom Wheelwright:
I like that.
Russell Neuman:
What you're going to confront is this: if somebody has used AI to commit a crime, to rob a bank, or used AI to violate someone's privacy, or used AI to libel someone, we already have laws in each of those three areas. So my term of art for my approach to regulation is: regulate downstream. You don't arrest the car company that made the car the bank robber used. You go after the crime and the perpetrator, not the tools that were used.
Tom Wheelwright:
I like that. Okay. So, speaking of regulating, I want to turn to using AI, and here's where I have a concern. The IRS has announced recently that AI will be used to catch people who are supposedly underpaying tax, and it will go after them. What's your view of that? Do you have a view of that?
Russell Neuman:
What AI can do is study a filing with the IRS that has been documented to be in fact incorrect, illegal, and wrong, and ask, "What are the other traits of that filer that might be a cue that we should pay attention to these sorts of people?" That's something, as you know well, the IRS has been doing all along. It's been looking for patterns of association so that it can use those other cues and say, "Let's pay a little more attention to this subgroup of filers, who are more likely to misrepresent their income and have more options for doing so."

So I think it's not fundamentally new; it's applying a set of principles. And obviously I'd be concerned if the IRS threw you in jail or generated all kinds of problems based on some attributes you have, without actually having documented that some element of your filing was demonstrably incorrect.
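The pattern-of-association idea can be sketched as follows. The traits, filers, and the rule "cues are traits shared by every documented bad filing" are all invented for illustration, not actual IRS criteria.

```python
# Traits observed in filings already documented as incorrect.
known_bad = [
    {"all_cash_business", "round_numbers", "no_1099s"},
    {"large_refund", "round_numbers", "no_1099s"},
]

# Cues: traits that every documented bad filing shares.
cues = set.intersection(*known_bad)

def attention_score(filer_traits):
    """Fraction of the cue set this filer shares."""
    return len(filer_traits & cues) / len(cues)

print(attention_score({"round_numbers", "no_1099s", "w2_income"}))  # 1.0
print(attention_score({"w2_income"}))  # 0.0
```

The score only says "pay more attention here," which is Russell's distinction: it can justify a closer look, not a penalty by itself.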
Tom Wheelwright:
So if I’m understanding you, identifying people to audit, not a problem. Assessing people without auditing them would be a problem.
Russell Neuman:
Yeah.
Tom Wheelwright:
Does that make sense?
Russell Neuman:
Yeah. And as a red-blooded American, I say the fewer people they audit, the better.
Tom Wheelwright:
Well, here's some of the challenges. Of course, what they're trying to do is match things up. From that standpoint, if they can do more matching and better matching, the computers can help them do that. For example, right now, if you get a 1099 and you don't report it exactly the same on your tax return, the IRS is going to catch that. What they can't do right now is match something like a K-1 from your business or from your investment. But presumably, with the new technology, they might be able to, and that's kind of the idea behind it.
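The matching Tom describes can be sketched as a dictionary comparison. The payer names and amounts are made up for illustration; the logic is simply "flag any payer whose 1099 amount differs from what the return reports."

```python
# Amounts the IRS received on 1099s, keyed by payer.
irs_1099s = {"Acme Corp": 12_000, "Widget LLC": 4_500}

# Amounts the filer actually reported on the return.
return_reported = {"Acme Corp": 12_000, "Widget LLC": 4_000}

# Flag every payer where the two amounts disagree.
mismatches = {
    payer: (amount, return_reported.get(payer, 0))
    for payer, amount in irs_1099s.items()
    if return_reported.get(payer, 0) != amount
}
print(mismatches)  # {'Widget LLC': (4500, 4000)}
```

Extending this from 1099s to K-1s is harder because K-1 amounts flow through to several different lines of a return rather than one matching figure.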
The concern, of course, is that the IRS has a newfound propensity to say, "We don't like something, and therefore we are going to disallow it, whether it's legal or not." Then you have to support it in court, which could cost you a million dollars, which is basically the ultimate sledgehammer, right?

So when I look at the potential for, I'm not even going to call it abuse, although a lot of people would call it abuse, the potential for using this for really nefarious purposes: do you think that is real? Or do you think it will somehow self-regulate?
Russell Neuman:
All right, Tom, you were talking about the IRS trying to find when an audit is justified or not. If we can encourage the IRS to use new decision systems cautiously, carefully, and thoughtfully to prevent false positives, that would be great. Say you mentioned your 1099, but there was some error in the IRS's version of the same 1099. You were doing everything you were supposed to, and they said, "No, no, no, we want to drag you into an audit because there was a slightly different identification number somewhere on the 1099." If they can fix that problem, that's solving against false positives. Now, Tom, you and I are going to think of a really good idea, and then you're going to carry it to the IRS. Here's our new idea. Are you ready?
Tom Wheelwright:
I love it.
Russell Neuman:
What the IRS does is say, "We are going to pursue this with a hammer, and we're going to take it to court, because we're going to disallow it 'cause we don't like it. Because, well, let's see, what's our primary motivation as the IRS? Ah, it's increasing income to the government."
Our idea, Tom and Russ's idea, is that they should have another measure: the harassment cost, the burden to the typical taxpayer generated by that same decision or audit system. We want to both maximize income to the government, fair enough, and minimize the unnecessary hassle, paperwork, et cetera, incurred by both business and individual filers. So let's find a real good consultancy that can work with the IRS, with its extra resources for hiring consultants now, and come up with a second measure of inconvenience, harassment, difficulty, and cost, legal and data-processing costs, to the individual filer, and see if they can then measure a ratio of the possible benefit of increased income to the government-
Tom Wheelwright:
And I love that
Russell Neuman:
… against the cost of the hassle to the individual taxpayer.
Tom Wheelwright:
I love that, because the definition of tax, and we'll wrap up pretty quick here, but a tax is a drag, right? You're taxing something, so you're putting a drag on it. If you tax more of something, you actually get less of it, because you're putting a burden on it. So I think those models would be great if we started using them to ask, "Okay, what tax actually works? If you're talking about, for example, income inequality, what tax works to help with income inequality? Is it the income tax, or would we be better off with an estate tax, or with a different kind of consumption tax, a value-added tax?"
I think that is a very positive view, and I love where you're going with that, Russ. I'm all in. Absolutely, we need to put this together. We'll start with the Office of Management and Budget; they might listen a little sooner than the IRS. But I think it's terrific to see the positives of this.
Russell Neuman:
The only thing that’s missing is my book, Evolutionary Intelligence, which should be there on that bookshelf behind you with all those other books prominently pictured.
Tom Wheelwright:
Absolutely. Well, it will be soon. Absolutely.
Russell Neuman:
I like that. I’m in agreement there.
Tom Wheelwright:
I totally love it. So again, the book is Evolutionary Intelligence: How Technology Will Make Us Smarter. It's absolutely been terrific having this conversation with you, Russ. I love what you're doing. I love the positive aspect of what you're saying: look, we may see some negatives, but then let's work the positive side. If there is a negative, say the IRS does something we find negative, let's find a way to help them make it positive. And I love that idea. I know that when we do that, we'll always make way more money and pay way less tax. Thanks, Russ.
Speaker 1:
You’ve been listening to The Wealth Ability Show with Tom Wheelwright. Way more money, way less taxes. To learn more, go to wealthability.com.