Mehrab Momin from aiCTO
00:00:00:01 - 00:00:20:12
Unknown
Transform your startup journey with the Energy Tech Nexus. Connect with fellow founders, access critical resources, and be part of a community shaping the future of energy and carbon tech. Your path to building a Thunder Lizard starts here. Learn more at energytechnexus.com. Welcome back to the show. It's been a while. Yes, it has, but we are back.
00:00:20:13 - 00:00:46:22
Unknown
And today we're really excited to have a really special guest with us. Today we have Mehrab Momin, who is your startup's fractional AI CTO. He helps a lot of startups with AI and has lots of experience working with startups. He had two of his own technology startups that he founded and worked on, and then he's worked with many other startups himself.
00:00:47:03 - 00:01:09:19
Unknown
In Houston, and also spent some time in San Diego, you mentioned. So thank you for coming in and being here with us today. Thank you. Thank you for having me. So tell us a little bit about how you got to this role of starting your company, where you act as a fractional AI CTO for startups. Yeah. So as you mentioned, I've worked with multiple startups, being a founder myself.
00:01:09:21 - 00:01:41:10
Unknown
And actually, I was also a bit of a service provider before. Between those, I felt this gap where especially early-stage founders, both technical and non-technical, sometimes really need this partner, which generally in the startup world we call a co-founder, but they are not ready to have that co-founder-level relationship, either because of resources or because they don't know if it's the right person.
00:01:41:12 - 00:02:14:00
Unknown
And on the service side, I always worked on project-based work, which felt very, very transactional. I really needed something much more engaging, where I can give that partner-level advice to the founders. So basically, I am that co-founder for hire, where you don't have to make a long-term commitment, there is no drama, and eventually you can transition out: I help them hire a full-time CTO as and when that need arises.
00:02:14:00 - 00:02:29:16
Unknown
And so basically I felt that was important. That's just one aspect of it. The second aspect was I had been working with AI for a long time. I didn't want to build just any product, but at the same time, AI on its own is not going to do the thing; you have to build the full application around it.
00:02:29:21 - 00:02:52:03
Unknown
So I had that experience. I had the experience, I'd say, of how do I package it together and do all of this together. And that combined to make this new role which I created for myself: let's build a fractional AI CTO practice. I'm slightly ADHD, which helps that way; I'm actually working with multiple people on multiple things.
00:02:52:04 - 00:03:12:23
Unknown
So that helps as well in the fractional role. And you happened to stumble onto AI quite early on in your career. So talk to us about that journey, from where you started to where we are today, where AI is all everyone wants to talk about. All right. Yeah. So actually AI is pretty old, right?
00:03:12:23 - 00:03:54:07
Unknown
Neural networks started in the 1950s, so it's very, very old. In my college final year, 1996, I did my first AI project, and it went nowhere. The project worked, but it was not practical because the processing power was not there. But in 2005, I was working for this company Early Kid, and they had some researchers from a company in Boston working on something called swarm intelligence, which allows you to do supply chain and scheduling kinds of optimization through AI.
00:03:54:07 - 00:04:23:20
Unknown
And it's always modeled on nature. I ended up working on that project. Lots of what I did around that project is now called data engineering, data pipelining, MLOps; none of those terminologies existed. I did not know how big a deal it was until 2007, when National Geographic did an article on swarm intelligence, and our project was one of the projects they used as an example. My manager got interviewed, and I was like, okay, I was working on something cutting edge.
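For readers: swarm intelligence is a family of nature-inspired optimization methods, and particle swarm optimization is the best-known member. This is a deliberately minimal sketch of the idea (a toy objective function, not the supply-chain system discussed here); all parameters are illustrative choices.

```python
import random

def pso(f, dim=2, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimization: each particle is pulled
    toward its own best-seen point and the swarm's best-seen point."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best position so far
    gbest = min(pbest, key=f)[:]         # swarm-wide best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy objective with its minimum at (3, -1)
sphere = lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2
best = pso(sphere)
```

A real scheduling system would replace the toy objective with a cost model of the schedule, but the swarm dynamics are the same.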
00:04:23:20 - 00:04:48:05
Unknown
It's still an interesting topic. But that was very, very early. And then later on, I actually worked on my own startup on computer vision; this links back to my college days. We built an AI camera, a computer vision AI camera, and had a patent around it as well. We were too early on some of the technology, but it was a good experience.
00:04:48:07 - 00:05:12:01
Unknown
Can you, for our audience, we're kind of energy heavy, so we probably know about technology, but AI gets bandied about a lot for a lot of different technologies. And I'm sure when you started your career it meant one thing, and swarm intelligence is a different thing. Can you take us through the different flavors of AI, and how that's different from today's AI?
00:05:12:03 - 00:05:34:22
Unknown
Yeah. So I think the history of AI is actually the history of how the energy industry, among many others, has been adopting it for a long time as well. Early AI was much simpler: you started having simple models, statistical models, and that's where most of the machine learning models started coming in.
00:05:34:23 - 00:05:54:19
Unknown
And I think some of this was constrained by the development in AI itself. But later on, you had deep learning models, because deep neural networks started developing. And what does that mean, when you say deep neural network versus statistical model? Yeah. So a machine learning model is a statistical model, because most of AI is still based on statistics.
00:05:54:19 - 00:06:23:00
Unknown
But yeah, you are applying different statistical formulas in your machine learning model. And then the next thing which came up is the neural network. Within those, the neuron, that artificial neuron, is where you still use a statistical formula to process the input, but then you form a node, and the neural network itself is a game changer in how the entire machine learning process works.
00:06:23:00 - 00:06:49:02
Unknown
So for machine learning, the statistical models you're building are much more human-driven, in terms of what the model will look like. In a neural network, you are relying on a kind of self-learning model that gets developed. And when it goes much deeper, when for computer vision and the like you add many, many layers to it, that becomes a deep neural network.
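For readers, the distinction can be made concrete. A single artificial neuron is just a statistical formula, a weighted sum squashed by a nonlinearity, which is essentially logistic regression; stacking layers of such neurons is what makes it a neural network, and many layers make it "deep." A minimal sketch in plain Python (the weights here are made-up numbers, not a trained model):

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum (the statistical formula)
    passed through a sigmoid nonlinearity."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """A layer is just several neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

x = [0.5, -1.2]
# One neuron alone = logistic regression; two stacked layers = a tiny network.
hidden = layer(x, [[1.0, -0.5], [0.3, 0.8]], [0.0, 0.1])   # layer 1
output = neuron(hidden, [1.2, -0.7], 0.05)                 # layer 2
```

In a real network these weights are learned from data rather than hand-set, which is the "self-learning" the guest refers to.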
00:06:49:04 - 00:07:12:19
Unknown
So even the gen AI of today is a form of that, with an interesting architecture on top of it. So more things got added. The journey I'm talking about, from ML models to deep learning models practically used in industry, is the last 20 to 25 years, even though neural networks and other things were getting used before.
00:07:12:21 - 00:07:36:18
Unknown
But then there's this new paper that was written by Google, "Attention Is All You Need," I think 2014 or '17. Sorry, I may be mixing the dates, because some embedding models came in earlier which were also a basis. So maybe that was 2014, and 2017 was attention. And then, based on that "attention is all you need,"
00:07:36:18 - 00:08:02:02
Unknown
that's an architectural change within the model that allowed for natural language processing. It allows the meaning in the language to actually get translated, or codified, into the math, which allowed these models to develop. And that became the basis for gen AI. So that's the journey. Interesting.
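For readers, the core operation of that paper, scaled dot-product attention, is small enough to sketch. Each query scores every key, the scores become weights via softmax, and the output is a weighted mix of the values; that weighting is how surrounding context gets pulled into each token's representation. A toy example with tiny made-up vectors and no learned parameters:

```python
import math

def softmax(xs):
    """Turn raw scores into positive weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: score each key against the query,
    softmax the scores, and mix the values by those weights."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Three toy token vectors attending to each other (self-attention)
toks = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(toks, toks, toks)
```

Real transformers add learned projections, multiple heads, and many stacked layers, but this weighted mixing is the mechanism being described.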
00:08:02:04 - 00:08:24:00
Unknown
So it's a lot of layers. So in some ways, when we talk about AI applications, it's a lot of the same underlying technology, but the form, or the application, really matters. Right. So the way I would describe it is building blocks. Each of these is a different building block, and they got configured differently.
00:08:24:02 - 00:08:45:02
Unknown
And then new types of building blocks got introduced within that, and that is how we are evolving. Interesting. Yeah, I think that's interesting, because you go everywhere today and everyone's talking about AI, but what they're really talking about are the natural language processing models, the LLMs, the ChatGPT that people are actually using on a daily basis.
00:08:45:02 - 00:09:10:19
Unknown
But there's so much at the back of it, and there's so much more to AI than just those ChatGPT chat boxes that we use, right? And that we can build on to make automated workflows; we're talking about agentic AI right now. And machine learning in industrial contexts has also had a great impact.
00:09:10:23 - 00:09:40:17
Unknown
What have you seen in how these different building blocks have been applied in different industries? So the way I actually see it is, as I said, like the history of AI itself. In the earlier part, I didn't mention there was another startup I worked on, file control. We used OCR; this is back in the 2002 time frame, and OCR was a very early computer vision, natural language kind of model.
00:09:40:19 - 00:09:58:14
Unknown
Right. So that was early; people started doing that. And what I mentioned with swarm intelligence was a neural network; it had a very specific way of configuring things to work. Then machine learning started coming in, I think back in the 2010s.
00:09:58:14 - 00:10:29:05
Unknown
And around that time, Google created certain libraries which really, really accelerated the use of machine learning models. Most of the machine learning models initially came from tabular data. At the same time, natural language processing was also happening, and you may have seen Google Translate and other types of OCR or some language understanding, but it was not very good.
00:10:29:05 - 00:10:57:21
Unknown
And I think in that 2014 time frame, when, as I said, the embedding models started coming in, suddenly your Google Translate, if you were watching, started translating much, much better. So that was the evolution on the natural language side, and in the same way those ML models started getting better as well. Actually, I want to clarify: AI, ML models, and deep learning models kind of get used interchangeably a whole lot of the time.
00:10:57:21 - 00:11:22:19
Unknown
I differentiate them like this, even though there are definitions in the industry, and somebody may even call a deep learning model an ML model; nothing wrong with that. Once you are actually implementing it, the devil is in the details. So I just want to make sure I clarify: people may refer to them slightly differently, but for me, anything which is a statistics-based model is an ML model, and anything which is a neural-network-based model is a deep learning model.
00:11:23:01 - 00:11:42:07
Unknown
Okay. Interesting. Yeah. And like you're saying, because they're used interchangeably, people have different schemas for what ML means to them versus natural language processing and neural networks. I'm going to use the terms loosely in natural language, but there is industry terminology and syntax.
00:11:42:07 - 00:12:17:06
Unknown
If you talk to any ML or AI engineer, they would specifically differentiate what each term means. Whereas, at the risk of a sidebar, one of the things that's most fascinating to me about the rise of the natural language models was that a lot of the gen AI we see today comes from, like you said, that Google Translate effect. Really, they were trying to figure out how to go from, like, Spanish to English, and all this magic capability falls out of this attention, almost as, like, half
00:12:17:06 - 00:12:35:07
Unknown
the translation system. Because in the original translation system, they had to take in text, figure out what it means, and then spit out text. I don't think they had any intention to make the modern ChatGPT systems they have today; the intention was to translate.
00:12:35:09 - 00:12:57:03
Unknown
And it's just amazing to me how all of this goes back to this very practical problem. It kind of speaks to the nature of research: sometimes you don't know where things will take you. And the kind of derivative industry that's come out of this is unimaginable. But I'm sure you're living it more than anyone.
00:12:57:07 - 00:13:16:17
Unknown
I'm just looking at it, discovering it from the past; it's a very, very interesting thought. Actually, it got me to a very different place here, because you mentioned language translation. We don't think about it, but languages are very, very important. Human language is a very important thing in human evolution, because this is how you communicate knowledge.
00:13:16:17 - 00:13:50:20
Unknown
This is how all the work is done, and there is a lot hidden in languages. So the attention model, when they did it for translation, what it unlocked was the underlying meaning and context in those translations. And that's what became ChatGPT and everything. So even though it was not intended, it actually happened, because that's the real attribute of language itself, which came fully into the open.
00:13:50:22 - 00:14:14:03
Unknown
And we have all used this term, large language model. That "large" is actually very meaningful, because the context became, you know, entire-internet-level data, and that's why it got so much more meaning out of it. That's why so much more happens in this AI world with LLMs. Right.
00:14:14:06 - 00:14:41:23
Unknown
And gen AI is bigger, because it's not just dependent on language; the same concept then gets applied to other meaningful context. So beyond language, there are vision language models. Think about context: if there is a picture of the three of us and this logo and the stuff, there is a context there. It could be language plus vision: there are three people sitting in a podcast, because they are all sitting in front of this kind of microphone.
00:14:41:23 - 00:15:02:04
Unknown
Right. So there is a context that comes from the picture. Similar context comes from books: if you read the first paragraph of a book, and then you go a few pages later and read something that is not mentioned, you now have the context of what you started reading; it's going to be related. The same thing, and it keeps on going and going and going.
00:15:02:04 - 00:15:24:23
Unknown
Right. So that whole idea of attention is context, weighting toward something that is more related. Right? You made me think: is it going to still be true that a picture is worth a thousand words in the future? It's probably worth even more if you give it to ChatGPT. Yeah. But, and this is maybe a basic question, or maybe
00:15:24:23 - 00:15:50:06
Unknown
it sounds like a silly question, but just out of my curiosity: people used to talk about semantic models, and how that was going to really be what changes the game when it comes to what we're doing today with GPT. How do semantics fit in? How is that different? Yeah. I think we have to take some context, right?
00:15:50:06 - 00:16:18:23
Unknown
Like, when somebody says semantics is going to be different, within what context are they talking? I think that is going to be more important, because to me semantics is still language, unless I'm unaware of where this thing is going. But maybe it's in terms of what I do on the coding side, the coding models people are talking about. I think that's one way of looking at it.
00:16:18:23 - 00:16:39:19
Unknown
So even in coding languages, when people are using languages to create programs, you are still using a complicated program; there is a meaning to it, right? Yeah. There are certain instructions to it. And now, with the coding emphasis, LLMs are generally also doing coding, and there are specific coding models as well. So they have very good semantic understanding.
00:16:39:19 - 00:17:10:09
Unknown
Right, because there is a structure to a programming language. And so what gen AI unlocked is translating that instruction into programming, and then the ability to recreate it: the gen part, the generative part. So first is the understanding part in gen AI, and the second thing is the generation part. Right. So now, if you look at coding models, semantics means how you are going to translate that whole thing together.
00:17:10:09 - 00:17:38:09
Unknown
So I think that's unlocking your existing knowledge of how languages were built, or how a certain knowledge base has been created, and now using those semantics to bring meaning out of it, where the model itself is not doing anything; it's just translating into the instruction set it is given. And that's where the agent work,
00:17:38:11 - 00:17:58:00
Unknown
most of the agentic work, comes into being. Right. Because what the model is doing is creating the code for how this agent is going to run, and then you get the meaning out of it. So I guess, to bring it back to energy and the application layer, you know, we're seeing a lot of innovation.
00:17:58:01 - 00:18:20:05
Unknown
I think with agentic AI coming out of the Valley, coming to programming and coding. And I did a little world tour of looking at how people are implementing AI within energy. I think one of them is here at Collide; they're putting workflows together for the energy industry.
00:18:20:11 - 00:18:42:12
Unknown
The only other applications I'm starting to see are around helping people with carbon accounting, or just accounting in general. And I'm curious if you've seen interesting applications for these new models that are pushing the envelope for the energy industry. So when you say VLMs, or regular LLMs too?
00:18:42:12 - 00:19:06:01
Unknown
Yeah, yeah, I think I know your work; you previously worked on VLMs. So. You know, vLLM and VLM are two different things, which are both valid. Yeah. vLLM is much more, it's like virtualizing the LLM; it's related to inference, which everybody's using for cost and how you're going to run efficiently and everything. And VLM is the vision language model.
00:19:06:02 - 00:19:27:16
Unknown
Okay, I think I misheard you. Okay. Yeah, yeah. It happens, actually; it's not the first time having this conversation, and it's easy to mix up, right? They're very close terminologies. But I think the VLM is an important thing. Before I go into VLMs, though, we should actually talk about something called a CNN.
00:19:27:18 - 00:19:52:12
Unknown
Okay. The CNN is an older thing. Okay. All of computer vision previously, and even now lots of it, is based on CNNs. Even when you are using facial recognition to open your phone, all of this is on the CNN, which is the convolutional neural network, because it uses convolution layers to get your features and everything. Is it like the math operation, convolution? Is that where that comes from, or something else? It's the convolution layers; that is math, and lots of math is involved in it.
00:19:52:12 - 00:20:12:20
Unknown
So you basically take an image and find basic features out of it. Those features are then translated into some meaning, and then higher-level objects, and higher-level objects, and that's how most computer vision gets trained.
00:20:12:20 - 00:20:35:05
Unknown
And that's why you don't create your own computer vision model from scratch. You always pick something which is the closest, then put in your own data, and get the next level of meaning, right? So basically the initial layers could be your lines and edges and all of those things, edges and blur and all that stuff.
00:20:35:05 - 00:20:53:11
Unknown
And then eventually you create objects. A simple example could be: okay, you can start identifying cats. At the next level you say, I don't care about just cats, I really want to see if it's a tiger or a lion or a house cat or something else. And you can go further and further. But you have to have data.
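The convolution layer mentioned here can be sketched directly. A small kernel slides over the image, and each output pixel is a weighted sum of the patch beneath it; a CNN learns those kernel weights from data, but a hand-made edge-detecting kernel shows the idea (toy 6x6 image, made-up values):

```python
def convolve2d(image, kernel):
    """Slide the kernel over the image; each output pixel is the
    weighted sum of the image patch under the kernel."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w)] for i in range(h)]

# A hand-made vertical-edge kernel, the kind of filter an early CNN
# layer tends to learn on its own
edge_kernel = [[1, 0, -1],
               [1, 0, -1],
               [1, 0, -1]]

# Toy image: dark on the left, bright on the right, so one vertical edge
img = [[0, 0, 0, 9, 9, 9] for _ in range(6)]
features = convolve2d(img, edge_kernel)   # strong response at the edge
```

Deeper layers then combine such feature maps into corners, textures, and eventually whole objects, which is the hierarchy described above.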
00:20:53:13 - 00:21:26:10
Unknown
So with that context set: now you can take that data, all that information, and it gets trained into your language models. So now you've mixed your vision with your language meaning. When the language model got trained, you could have taken a straight image and labeled it with some language, hey, this is a cat and this is not, and it would get trained.
00:21:26:12 - 00:21:53:02
Unknown
Which happened initially, and it still happens in lots of cases, because not everything has gotten to a finer detail. But you are able to get much deeper meaning if you have that underlying convolutional-layer data to train with. So that really helps; what it unlocks is real meaning with pictures.
00:21:53:04 - 00:22:29:12
Unknown
And you can actually do lots of real-world, physical-world work. You can do scene analysis, which with a typical CNN model you would have to specifically train for that particular scene; here, because it can mix and match from lots of data, just like a large language model, it can do those sophisticated things. Vision language models come in different flavors, like image vision and native video vision. Most of the image vision models can do your video as well, because a video is frames of images.
00:22:29:14 - 00:22:47:15
Unknown
But, you know, if you have to do all of that at that level, then you are doing more and more processing, and it becomes more complex. So a lot of the limitation on vision right now, even though it unlocks a lot, is the size of data, the size of model, and the processing. So there are lots of optimizations happening, lots of people working on it.
00:22:47:15 - 00:23:11:17
Unknown
And this is one of the areas of interest for me that I really explore a whole lot: how do you stack different models? Everybody is trying to go for this one model which does everything, but I think people are realizing sooner or later you get into limitations. So sometimes you may just stack models for different meanings.
00:23:11:18 - 00:23:29:19
Unknown
You take a very good model which picks up a picture and detects a person and so on, and then you take that and give it to the VLM as further data. Just like your prompt: you're adding more things to your prompt, giving more context and more knowledge to your VLM, and I think you'll get much better output from that.
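The stacking idea can be sketched as a small pipeline. The detector below is a hypothetical stub standing in for a real model call (for example a fast CNN detector); the point is only the shape of the hand-off, where a cheap model's output becomes extra context in the bigger model's prompt:

```python
def detect_objects(image_path):
    """Hypothetical stand-in for a small, fast detector model.
    In a real system this would run an actual CNN and return labels."""
    return ["person", "microphone", "logo"]

def build_vlm_prompt(image_path, detections):
    """Fold the detector's output into the vision-language model's
    prompt, so the larger model starts from grounded hints instead
    of raw pixels alone."""
    hints = ", ".join(detections)
    return (f"Objects already detected in {image_path}: {hints}. "
            "Describe the scene and what the people are doing.")

# Stage 1 (cheap detector) feeds stage 2 (expensive VLM prompt)
prompt = build_vlm_prompt("frame_001.jpg", detect_objects("frame_001.jpg"))
```

The design trade-off is the one the guest names: the single do-everything model is simpler to call, but a stacked pipeline lets each stage stay small, cheap, and replaceable.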
00:23:29:19 - 00:23:50:14
Unknown
Well, and again, another basic question, but all these models that we use: who's creating these models, where do you get them from, and why do they do it for free? Yeah. So this is happening in the research area, and then companies are also realizing that they need to.
00:23:50:16 - 00:24:19:15
Unknown
Yeah. So think about Linux: somebody created open source Linux, and yet the industry benefits; there is commercial application. I think the same thing is happening in the model world as well. The example I gave was CNNs, where people are building on top of each other. That's actually a very good value proposition for the world as a whole: if it's more open source, and more data is available, especially at the basic foundational level, then more people can build on top of it.
00:24:19:15 - 00:24:41:06
Unknown
And if they publish it back, the community benefits, and then you go to the next level of enhancements on it. So anyone can build a model, anyone; typically researchers are building them in universities, but anyone can. I have built some. But a vision language model or a large language model is very, very resource intensive.
00:24:41:06 - 00:25:11:03
Unknown
So individuals are not doing that; that's where all these foundational model investments are going in. But there are efforts to simplify that. Like Mira Murati, the former CTO of OpenAI, who started this company called Thinking Machines: they just released a product called Tinker, which allows you to fine-tune LLMs. It's currently for researchers, and they take care of all the complexity of training.
00:25:11:03 - 00:25:38:14
Unknown
Right. So we are going in that direction. Not at an individual level yet, but still, there are lots of small language models that individuals can tweak; the technique is called fine-tuning. So people can put their own data on top of an SLM. If you have proprietary data and you don't want to really give it away, let's say it's medical data, there are regulatory requirements or other compliance requirements.
00:25:38:16 - 00:26:01:06
Unknown
Or other proprietary data which is very good, and you want to use it. There are multiple ways to do that; one is fine-tuning. Right. And Unsloth is one of the libraries people use to fine-tune those small language models. It is still resource intensive, but a smaller organization can do that.
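For readers, fine-tuning at its core just means continuing to train an existing model's weights on your own data. A deliberately tiny illustration with a linear model standing in for a language model (made-up data, plain gradient descent; real fine-tuning uses the same idea at vastly larger scale):

```python
def train(w, b, data, lr=0.1, epochs=200):
    """Plain per-sample gradient descent on squared error for y = w*x + b."""
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# "Pretraining": lots of general data, roughly following y = 2x
general = [(x / 10, 2 * x / 10) for x in range(-10, 11)]
w, b = train(0.0, 0.0, general)

# "Fine-tuning": continue training the SAME weights on a small
# proprietary dataset with a domain shift (here, exactly y = 2x + 1)
domain = [(0.0, 1.0), (0.5, 2.0), (1.0, 3.0)]
w_ft, b_ft = train(w, b, domain, epochs=500)
```

The key point is that fine-tuning starts from the pretrained weights rather than from scratch, which is why a small dataset is enough to shift the model toward the domain.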
00:26:01:06 - 00:26:21:13
Unknown
Right. And even individuals with some resources can do that. So that's a possibility. Should you do it? I think that's another question. And, you know, when we think about AI, tech companies, tech startups in general, we think about the Bay Area; that's where a lot of the development happens.
00:26:21:13 - 00:26:50:20
Unknown
And a lot of the learnings come from there, because there's such a huge concentration of computer scientists and engineers in that area. How have you found operating as an AI expert in Houston? What do you think is our level of maturity when it comes to AI as a city? I would actually say that we are not too far behind.
00:26:50:22 - 00:27:16:17
Unknown
And the reason is that Houston's thriving energy industry does put lots of effort into cutting-edge technology. The problem we have in the energy industry is that we work on physical things, large infrastructure on the industrial side, and they last many years. So if you're going to look through the lens of what you see out there from the energy industry,
00:27:16:19 - 00:27:40:08
Unknown
you will not see the results compared to the Bay Area, which is very consumer heavy, where the results are immediate. The same level of technological work may be happening. And I did mention to you that in 2005, in the energy and process industry, we were working on the swarm intelligence model, right? And I'm sure many other energy companies were doing that; they adopted ML and everything very early.
00:27:40:10 - 00:28:00:18
Unknown
The thing is, their results don't show up as fast. That doesn't mean the work is not happening; I just wanted to note that. Maybe that's the reason people concentrate on the Bay Area, but it doesn't mean a whole lot is not happening here. So there is a whole lot of positive in Houston, which shows up a little bit later in the public domain.
00:28:00:23 - 00:28:33:00
Unknown
Yeah. And I feel like the other thing that happens here in Houston, and I don't know if it's cultural, is that a lot of large companies try to keep that know-how in-house, probably because the control of data is so tightly coupled with the technical expertise. I think a classic example would be image processing around subsurface data: if you're an oil company, you don't necessarily want to put that out there, and you kind of want to have that competency, to know what it means, in-house.
00:28:33:02 - 00:28:50:16
Unknown
And so they have the resources to build it inside if they feel it's valuable, and that never becomes a startup, basically. Yep. That's true. Yeah. And you know, there's a lot of talk about how, so far, a lot of the AI progress has been in the digital sphere, right?
00:28:50:16 - 00:29:20:12
Unknown
Now, when you talk about Houston and you're talking about industries, it's really the physical sphere, and how you're bringing the digital part of it in: the vision, the cameras, the 3D. How are you seeing it actually in action, from ten, twenty years ago when you were in industry to today? What are some real examples of it being applied?
00:29:20:16 - 00:29:58:15
Unknown
Yeah. So I have a very good example. I was talking to one solar financier. He was mentioning that whenever they make an investment, or finance a solar project, especially residential projects or even small commercial projects, one of the things they want to look for, and they were looking for the solution, and this is a few years back, two, three years ago: are there trees around your solar installation? Now, you can take satellite images,
00:29:58:20 - 00:30:25:03
Unknown
you can take drone footage of the property before you actually finance that project, and with computer vision detect what kind of trees they are, what their maturity is, how much they will grow over a certain number of years, and whether they will impact the sunlight on the solar panels. And based on that, they can calculate the ROI of the project.
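For readers, the calculation being described can be sketched with made-up numbers: the vision model's tree-growth prediction becomes a per-year shading loss that discounts each year's energy revenue, and the ROI follows directly. Every figure below is hypothetical, for illustration only:

```python
def project_roi(annual_kwh, price_per_kwh, install_cost, shade_loss_by_year):
    """Undiscounted ROI over the horizon: each year's revenue is the
    array's output, reduced by that year's predicted shading loss."""
    revenue = sum(annual_kwh * (1 - loss) * price_per_kwh
                  for loss in shade_loss_by_year)
    return (revenue - install_cost) / install_cost

# Hypothetical vision-model output: nearby trees shade 2% more of the
# array each year over a 10-year horizon (0% now, up to 18% in year 10)
shade = [0.02 * year for year in range(10)]

roi_with_trees = project_roi(10_000, 0.12, 9_000, shade)
roi_no_trees = project_roi(10_000, 0.12, 9_000, [0.0] * 10)
```

A real underwriting model would add discounting, panel degradation, and electricity price escalation, but the structure is the same: tree growth enters as a time-varying loss term.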
00:30:25:04 - 00:31:00:07
Unknown
That's a very important, very practical energy, sorry, solar use case of AI in the physical world. Yeah. And I guess it's also robotics, right? All manufacturing now is really advanced robotics, and a lot of it is AI. What the future looks like going forward is the whole idea of physical AI, with robotics and industrial space.
00:31:00:07 - 00:31:21:19
Unknown
And we have this company, Persona, here in Houston building humanoids. Talk to us about them. Actually, we should bring them on our podcast next. Yeah. So they are building humanoid industrial robots, and they are building in Houston because we have all this industrial infrastructure that needs it. Are you ready to lead the decarbonization charge?
00:31:21:19 - 00:31:48:05
Unknown
Energy Tech Nexus is your platform for growth, offering unique resources and expertise for energy and carbon tech founders. Join us at energytechnexus.com and start building your Thunder Lizard. I am looking forward to that time when I can get my personal AI maid who can clean up after me, do the dishes, all of that stuff. Yeah, so there are lots of companies working on humanoid robots, and that's where they wanted to differentiate.
00:31:48:05 - 00:32:17:12
Unknown
And that's the Houston advantage, right? There are lots of industrial uses for humanoid robots. And why that, why not just build some automation? Because there are lots of industrial tasks that are very specific and require human intelligence, but they are small tasks; you don't want to do tooling and machining for each of them, and that's where you need humanoid robots that can switch from one task to another, or a combination of tasks, to accomplish something.
00:32:17:14 - 00:32:37:21
Unknown
Why are they called humanoid robots? Because they function like humans. They have hands and everything, and they look like humans. But it's not just that they look human; it's that they function like humans as well, like having a hand, picking up a small tool, and using that tool to do another thing.
00:32:37:21 - 00:33:01:10
Unknown
Right. So lots of these things come from human physiology, how human physiology works. In the same way, you can actually train on human action, and that's another good part of it. You can watch how humans are working and use that as training data. It's another use of computer vision, in reverse, where you're generating training data to train robots to work in a certain way.
00:33:01:10 - 00:33:21:04
Unknown
And then they are using computer vision to actually recognize and do that work. Is a lot of that enabled because the computation is there today, to the point where it's practical? What's changed in the last few years? So yeah, computation is definitely one big thing. And if you go back, I think modern industrial IoT is what enabled a lot of this to happen.
00:33:21:04 - 00:33:46:15
Unknown
Okay. So this is the next level of evolution of that. Why did IoT happen? IoT happened because we were able to miniaturize processing and put communication on those miniaturized devices to create IoT devices. But most of the industrial customers we were talking about had been doing this for a long time.
00:33:46:17 - 00:34:11:00
Unknown
It used to be called industrial IoT, and that's where I worked for many, many years, as part of all of this swarm intelligence using IoT data. The same thing is happening on the AI side now. You have much bigger processing on the edge, and that edge processing is what's enabling things to happen away from data centers.
00:34:11:00 - 00:34:40:20
Unknown
Right. Otherwise, everybody is thinking of large data centers. You can't put a large data center on a robot, but you can do edge processing. And there are processors coming up for this; some are already in the market. Nvidia has this whole Jetson line, which is an edge platform, and they're building a whole line of other processors for edge computing. Edge devices like robots are a growth use case for Nvidia as well.
00:34:40:22 - 00:35:01:20
Unknown
And long term, that's where the difference between large language models, or large AI models, and scaling those down to train smaller models becomes valuable, because you're really trying to push things to the edge. Yeah. Right. That's true. And the other thing I talked about is that you are going to have a stack of models, right?
00:35:01:20 - 00:35:26:00
Unknown
So you don't need to process everything together. You process what you need at that moment. And then you also have communication, so anything that is larger and still needs processing, you can send back to the cloud data center and just get the result back. It depends on how you are going to design it based on your need, but the idea is to be independent of the cloud most of the time.
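The stack-of-models routing he describes can be sketched as a simple dispatch rule. This is an illustrative assumption, not any specific robotics stack: the latency budget, the model names, and the function itself are all hypothetical.

```python
# Illustrative sketch: route each inference request to the on-device model
# when it must happen in near real time, and to a larger cloud model when
# latency allows and connectivity exists. All names here are hypothetical.

EDGE_LATENCY_BUDGET_MS = 50  # assumed budget for near-real-time actions

def route_request(task_deadline_ms: int, cloud_reachable: bool) -> str:
    """Pick which tier of the model stack should serve a request."""
    if task_deadline_ms <= EDGE_LATENCY_BUDGET_MS:
        # Safety-critical, near-real-time work stays on the device.
        return "edge-small-model"
    if cloud_reachable:
        # Heavier, non-urgent work can go back to the data center.
        return "cloud-large-model"
    # Degrade gracefully: stay independent of the cloud when offline.
    return "edge-small-model"

print(route_request(20, True))    # near-real-time, stays on the edge
print(route_request(500, True))   # heavy and non-urgent, goes to cloud
print(route_request(500, False))  # offline, falls back to the edge
```

The point of the sketch is the fallback branch: the device keeps working when the cloud is unreachable, which is the "independent of cloud most of the time" design he mentions.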
00:35:26:02 - 00:35:58:17
Unknown
But you don't have to be. If you are working in a warehouse and you have communication and cloud processing available, you can do cloud processing, but your near real time actions need to happen on the device itself. As an AI expert, are you ever worried about AI going rogue on us? Well, yeah, I do believe that people will misuse it, which has happened with every technology, but AI on its own
00:35:58:17 - 00:36:19:10
Unknown
going rogue on us is still going to be a human action, not an AI action. Yeah. So, a related question. I've learned that in programming you have these things called unit tests, where you make sure that if an input comes in, the right output comes out. That works when the technology is not statistical, right?
00:36:19:10 - 00:36:38:10
Unknown
Like, you have a very expected outcome. Is there a unit test for these modern models, or how do people think about that? How do I know that what's going in is generating what I want out? Yeah, you're right, because this is all probabilistic, right? It's not deterministic, but there are ways to make it deterministic.
00:36:38:10 - 00:36:57:21
Unknown
Correct. And there are many techniques. One technique people use is taking two different models, a smarter model and a not so smart model, then taking the output of one and evaluating it with the other, which gives a yes or no verdict on the first one. And that becomes your unit test, right? So the second model acts as a validator.
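The two-model check described here is often called LLM-as-judge, and it can be wired up like an ordinary unit test. In this sketch both the "worker" model and the "judge" model are stand-in functions, since a real version would call out to actual model APIs; everything named here is an assumption for illustration.

```python
# Sketch of an LLM-as-judge "unit test": a second model grades the first
# model's output and returns a yes/no verdict that a test can assert on.
# Both models are stubbed here; a real version would make API calls.

def worker_model(prompt: str) -> str:
    # Stand-in for the probabilistic model under test.
    return "Paris is the capital of France."

def judge_model(prompt: str, answer: str) -> bool:
    # Stand-in for the evaluator model. Here it just checks for the
    # expected fact; a real judge would be another (smarter) LLM call
    # prompted to answer strictly yes or no.
    return "paris" in answer.lower()

def test_capital_question():
    prompt = "What is the capital of France?"
    answer = worker_model(prompt)
    assert judge_model(prompt, answer), f"judge rejected: {answer!r}"

test_capital_question()  # passes silently, like a normal unit test
```

The yes/no verdict is what makes the probabilistic output testable: the judge collapses free-form text into a boolean a test runner can check.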
00:36:57:21 - 00:37:18:03
Unknown
So it takes the output and says whether it's a good thing or not. It's still probabilistic, but in a way you're turning it into something deterministic. Interesting. So that's how you keep it from going rogue. You've got to make sure you validate. So, as you're working with startups,
00:37:18:05 - 00:37:45:14
Unknown
what kind of issues are they coming to you for, and what do they often fundamentally get wrong? So I think it's a variety of issues. One is, it depends on who's coming, right? If the founder is not technical and doesn't understand AI, they use the buzzwords. Until about a year or a year and a half ago, I used to hear this a whole lot.
00:37:45:14 - 00:38:05:05
Unknown
"I want to train my own LLM." I'm like, do you have $75 million? That was the cost estimated at the time for Meta doing its training. So part of it is demystifying that, clearing it up: hey, what do you really need? Do you need to build your own model, or are you doing applied AI?
00:38:05:06 - 00:38:27:08
Unknown
So that was very basic. Then there are the people who know that and say, hey, I want to fine tune versus RAG, but they don't have that clarity either. But in the overall picture, the thing everybody got wrong a whole lot more, even when they knew the technology, is that they don't have enough proprietary data to work with. The thing is, they think they have something unique.
00:38:27:10 - 00:38:54:14
Unknown
It's not unique unless you have your own unique data behind it, or a very unique way of looking at it. So data was one of the biggest things that comes up, because either there's not enough data to process, or if they have data, the use case is not big enough.
00:38:54:14 - 00:39:30:09
Unknown
So they will say, hey, I need to fine tune this, but it's not cost effective for them to do that, right? Maybe RAG is a good solution. And there is the other side as well: if you have a large amount of data that is not changing, and you do a whole lot of inference on it, fine tuning an SLM may be a better way to go than RAG, because RAG gets you started much faster
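The retrieval step that makes RAG quick to start with, the "R", can be sketched without any external libraries. The toy word-overlap score below stands in for the embedding similarity a real system would use, and the documents and query are made-up examples.

```python
# Minimal retrieval sketch for RAG: score documents against a query and
# prepend the best match to the prompt. Word overlap stands in for the
# embedding similarity a production system would use.

def score(query: str, doc: str) -> int:
    # Toy relevance: count shared lowercase words between query and doc.
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str]) -> str:
    # Return the single most relevant document for this query.
    return max(docs, key=lambda d: score(query, d))

docs = [
    "Fine tuning changes model weights using labeled examples.",
    "RAG retrieves documents at query time and adds them to the prompt.",
]
question = "how does RAG use documents"
context = retrieve(question, docs)
# The retrieved context is stuffed into the prompt; no weights change.
prompt = f"Context: {context}\nQuestion: {question}"
print(context)
```

This also illustrates the cost trade-off in the conversation: RAG pays for retrieval and a longer prompt on every inference, while fine tuning pays once, up front, to move the knowledge into the weights.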
00:39:30:11 - 00:39:50:00
Unknown
But the cost per inference is higher. If you are not doing lots of inference, though, RAG is very good, and typically that's what's happening; most of the use cases are like that. So I think data, and how to process it, is where many people go wrong. And then some people just don't know that applied AI is a solution
00:39:50:00 - 00:40:18:10
Unknown
they can use. They just want to have their own AI component, and they get the build versus buy decision wrong, so they do not unlock the real value of their business. The value may not be in building that AI, but in applying somebody else's AI in their business. So how do you think we should be thinking about AI in the future, and how it can be applied and impact the way we work?
00:40:18:12 - 00:40:43:14
Unknown
So not just founders, I'll answer this for all human beings, whoever is interacting with AI. And I'm going to inject the AGI discussion into it as part of this answer. We talked about this whole language and meaning and context in training, right? This is all rational intelligence.
00:40:43:16 - 00:41:12:21
Unknown
And AI is getting very, very good at it. It's getting better and better, better than humans, much faster, and it will keep growing for a long time. You won't be able to compete as a human on the rational intelligence level. But as humans, our intelligence has more planes: we have an emotional intelligence plane, and we have a spiritual intelligence plane.
00:41:12:21 - 00:41:54:18
Unknown
And I'm not talking about the religion part. I'm talking about your gut, your feeling, that internal thing everybody has, right? The rational intelligence within AI is mainly dealing with the existing knowledge that has been fed into it. It allows you to rediscover that knowledge and to combine existing knowledge and present it in different ways, but it will never create new knowledge, because new human knowledge comes from the combination of these additional planes of intelligence that we have.
00:41:54:20 - 00:42:20:15
Unknown
If we understand that and master it, and many people do, it has been happening for centuries and keeps growing; that's how the human race grows in its intelligence and capabilities. We will be able to unlock things with AI and actually become its master, and have it serve us rather than the other way around. And I think that's what I would want everybody to understand.
00:42:20:17 - 00:42:42:02
Unknown
The same goes for founders: your human intelligence is much, much higher than AI's. Use AI for the rational part, and it's going to keep getting better. And I give this example: we were in the agricultural age as human beings, and then suddenly industrialization started happening. Machines came in, and agriculture didn't go away.
00:42:42:02 - 00:43:04:10
Unknown
The human race didn't go away. We just moved into different kinds of tasks. There is no way human power can match what industrial equipment can do. The same thing will happen here, and we will unlock our next plane. That's how we unlocked information technology, which is the current wave, what everybody's doing and everybody's worried about.
00:43:04:12 - 00:43:36:22
Unknown
So if you look through this lens, you will unlock lots of new possibilities for innovation, and also for how you are going to use AI for your own good and humanity's good. That's the lens I actually try to give to anybody who's asking, a founder or any person who is worried about AI. You know, anyone on our team knows that when I think a lot, it's when I sit in the shower. I have downtime there, and I enjoy it; it's like a sauna experience for me, and I start to disassociate,
00:43:36:22 - 00:43:56:20
Unknown
right? And I'm just thinking about different things. And that's when those eureka moments kind of play against each other. Half of them don't make any sense, right? The brain is just trying to work out connections that maybe aren't there. But that's where the irrational ideas emerge.
00:43:56:22 - 00:44:15:01
Unknown
And that's right. Yeah, my best ideas come literally from just taking a shower. Yeah, it's a chance to process things that aren't necessarily rational. Yep. That's exactly what spiritual intelligence is. About 20 plus years ago, I read this book by Danah Zohar called Spiritual Intelligence. Just like we have IQ and EQ,
00:44:15:06 - 00:44:34:22
Unknown
she defined this as SQ. And that's exactly those eureka moments, the things you call gut intuition, all of these things: where are they coming from? You get a download, but there's very little data that justifies it. It's like a feeling. But think about it this way: we say we don't know why it happens or how it's happening.
00:44:34:22 - 00:45:06:06
Unknown
Take any piece of human knowledge today. That knowledge was a fact that existed; it's universal knowledge. It came into human consciousness when somebody thought about it, and eventually somebody observed it, right? Yeah. So the same thing is happening in this thought process of eureka moments. We don't understand as humans what is happening internally.
00:45:06:08 - 00:45:33:20
Unknown
That itself is a huge opportunity for research and figuring things out. We have this whole field of neuroscience, which itself is a huge area. So many people are worried that jobs are going away. Yes, you will have to retrain and look for new things. I cannot tell what tomorrow is going to be, but just the way industrialization reduced the need for labor for a given output,
00:45:33:22 - 00:45:52:17
Unknown
the same way AI will reduce the need for labor for all the information jobs, and it will unlock something. I don't know what that is going to be; I think it's human ingenuity that is going to play out and unlock it. And that plays to creativity, right? That's what they say happens when you're wandering, when you're going for a walk.
00:45:52:17 - 00:46:19:06
Unknown
For me, it was a lot like when I was in Norway: I would be sitting on the bus when random ideas would stick together, and then you have this, oh my God, this is the solution, I need to do this. So when I'm thinking about the next generation going to school, or people early in their career, what kind of skills should they be focusing on to be set up for this world where you don't need to learn all the formulas?
00:46:19:06 - 00:46:44:23
Unknown
You don't need that rationality as much. How are you going to solve problems? It's just like a calculator: you use a calculator, and you're solving bigger problems, right? And you're still using calculators. The same way, you use the rational intelligence provided by AI. But think about it: even then, you have to provide new knowledge for the AI to grow.
00:46:45:00 - 00:47:13:07
Unknown
Yeah. It's not going to grow without humans. It's not going to progress. It may codify everything we know as of today; let's say all the processing power is given to AI, and it can codify everything. Right now it has not even done that. And earlier we talked about languages, right? There are things across languages that the models are not able to understand, because there are meanings that are much deeper in one cultural context, which are not codified, not written down.
00:47:13:09 - 00:47:34:07
Unknown
There are many oral cultures that exist in this world, even in my own community; it's very oral. There is this whole oral history of my forefathers, which everybody talks about between their cousins, and if you ask anybody, did you ever write it down? No, nobody wrote it. Okay, so AI doesn't know that information, or the wisdom that was built from it.
00:47:34:07 - 00:47:52:04
Unknown
That human wisdom has not come into it, right? But even if all of it got codified today, it is still existing experience. Any new experience, it is still we humans who are going to feed it into AI, right? Yeah. One interesting thing happened last night as we were just chatting; we had a little team dinner.
00:47:52:06 - 00:48:13:18
Unknown
I think at some point, like five different languages got spoken around the table, and four of them were from across the Indian subcontinent. And there was a question of how much different people understood, because in many ways, between Urdu and Hindi and Punjabi, as you were saying, there are a lot of loan words, but they just mean different things.
00:48:13:19 - 00:48:39:12
Unknown
That was fun. I don't know what we're going to do with that information, but it reminded me, when I was touring through Asia, there's a very significant concern about losing the language context as a form of history, because a lot of the language models are getting built on English. English is our lingua franca today, I guess, and a lot of the models get built that way.
00:48:39:17 - 00:49:17:06
Unknown
And you aren't necessarily covered by that. Yes, you can translate, but the models are not necessarily being trained on localized knowledge and localized semantics. What we were talking about, specifically for India, is that the same words just mean different things in different regions, and that can be hyperlocal. So when I was in Singapore, there was a lot of interest in saying, well, we need to capture Singlish and make sure we have a version that handles Singlish, both so everything can be processed locally, because they're worried about data sovereignty, but also to preserve that kind of history.
00:49:17:09 - 00:49:38:12
Unknown
And one of the other participants around the table was talking about preserving Taiwanese, which is a tonal language, and pinyin doesn't necessarily capture the nine different tones. So it was an interesting conversation about how there is a lot that hasn't yet made it into the written material we're seeing.
00:49:38:14 - 00:49:58:01
Unknown
The large language models still miss a lot of that, so there's a lot of opportunity. Yeah. So I can bring this whole thing back to climate tech. Okay, good. We can do that. Perfect. I think we're wrapping up, but within the same context, right? The thing is, industrialization enabled a lot for us, but it also created lots of problems, including some of the climate problems.
00:49:58:03 - 00:50:36:12
Unknown
Humans were dealing with lots of these problems before industrialization: heat, how to keep cool, and so on. There are architectural techniques in different parts of the world that were built to do cooling and such, right, which are very, very sustainable. All of this knowledge coming from different cultures and languages needs to be preserved and understood, because that's where you can get new innovations. AI may be good at combination, but if you don't feed it that data, it's not going to get you there.
00:50:36:12 - 00:50:58:00
Unknown
Right? So what you learned in the US and what exists in sub-Saharan Africa might address the same problem together, and it may be preserved in these different languages and cultural contexts. Bringing that together, you may solve a climate tech problem in a very different way than how you would think about it from the mainstream sciences today. Yeah.
00:50:58:00 - 00:51:19:08
Unknown
No, that's great. And especially with energy, it's a global industry. Climate is a global challenge. And there are a lot of solutions that are kind of hidden out there that aren't necessarily just here in Houston, Texas. So that's something we're going to be working on. And what are you excited about
00:51:19:08 - 00:51:38:17
Unknown
in what you're seeing in Houston? When startups come to you, what are they building? What's your sense of what's coming for us in the next couple of years? Yeah. So I think one thing is, the internet has leveled things and accelerated them. What's happening in Houston is not very different now from what's happening elsewhere.
00:51:38:19 - 00:52:00:16
Unknown
So very interesting things are happening. A travel tech company, a completely consumer kind of company, could be coming out of Houston, right? We just talked about humanoid robotics coming out of Houston. There are fintech solutions coming out of Houston. So there are many things coming out of Houston beyond the energy we are known for.
00:52:00:21 - 00:52:27:18
Unknown
But I'm going to talk about the most important part, which many people miss. We have four big things in Houston: energy, health tech, space, and logistics. And the fifth, bigger thing, which people don't realize, is that Houston has a very large small to medium size business economy, which is very diversified. And as we talked about, different ideas coming together.
00:52:27:18 - 00:52:53:19
Unknown
And that's actually starting to happen in Houston, right? We are actually learning from each other's industry solutions, and that's actually an advantage in Houston. Interesting. To bring it together, right? I can give you one simple example; I think it was one of your prior podcast guests, whom I listened to, who talked about vision tech for worker safety.
00:52:53:19 - 00:53:21:23
Unknown
Right. And he comes at it from a vision model trained on human action. We have lots of retailers and such, and they do detection with computer vision right now, also based on human action. Now two different industries with the same kind of solution come together, and you will have a very good solution for Houston. So to me, I see that happening slowly, where people are learning from one industry and applying it in another.
00:53:22:04 - 00:53:47:17
Unknown
If we can enable that, and folks like you especially can make that happen, I think we will see accelerated new innovation coming out of Houston: innovation that builds on Houston's own strong industries, but also on learning from each other and bringing something completely new. Interesting. Yeah. How can people get in touch with you?
00:53:47:20 - 00:54:08:13
Unknown
You're usually located at the Ion in Houston. Yeah, I have an office out of the Ion. You can reach me through my website, aicto.services, and you can always connect with me on LinkedIn and send me a message there. Awesome. Yeah. Thanks for being with us. Thank you for having me.
00:54:08:13 - 00:54:10:03
Unknown
It was very enjoyable talking with you.