GSI Technology Fourth Quarter and Fiscal 2023 Earnings Call Transcript

There are 7 speakers on the call.

Operator

Greetings and thank you for standing by. Welcome to the GSI Technology Fourth Quarter and Fiscal 2023 Results Conference Call. At this time, all participants are in a listen-only mode. Later, we will conduct a question and answer session. At that time, we will provide instructions for those interested in entering the queue for the Q&A.

Operator

Before we begin today's call, the company has requested that I read the following Safe Harbor statement. The matters discussed in this conference call may include forward-looking statements regarding future events and the future performance of GSI Technology that involve risks and uncertainties that could cause actual results to differ materially from those anticipated. These risks and uncertainties are described in the company's Form 10-K filed with the Securities and Exchange Commission. Additionally, I have been asked to advise that this conference call is being recorded today, May 16, 2023, at the request of GSI Technology. Hosting the call today is Lee-Lean Shu, the company's Chairman, President and Chief Executive Officer.

Operator

With him are Douglas Schirle, Chief Financial Officer, and Didier Lasserre, Vice President of Sales. I would now like to turn the conference over to Mr. Shu. Please go ahead.

Speaker 1

Good day, everyone, and welcome to our fiscal fourth quarter and full year 2023 financial results earnings call. The 2023 fiscal year was filled with many positive developments, new partnerships and progress towards achieving our goals. We also experienced setbacks and unforeseen delays on several fronts with the APU. We learned a lot during the year about the addressable markets Gemini 1 can reasonably pursue with our team, given our limited resources. However, we recently have made significant strides in leveraging third-party resources to help identify users, resellers and OEMs. These efforts are proving valuable in helping us identify opportunities for capturing revenue and increasing awareness of the APU's tremendous capabilities.

Speaker 1

We have also sharpened our focus for Gemini 1 to leverage our resources and prioritize near-term opportunities such as synthetic aperture radar, or SAR, in satellites, where we have a superior solution. We understand these markets and know whom we can support and help with our offering. Another focused application for Gemini 1 is vector search engines, where our APU plugin has demonstrated enhanced performance. To this end, we have dedicated more resources and prioritized the target customers that have expressed interest in leveraging our solution. Our data science team has been busy working on the SaaS search project with one leading provider, and we plan to pivot to other players in the space once we have met our deliverables with the first one. Looking ahead on our roadmap, we will build upon the work we are doing today in future APU versions to address large language models, or LLMs, for natural language processing. Vector search engines are a fundamental part of the ChatGPT architecture and essentially function as memory for ChatGPT.
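
The role described above for a vector search engine (external memory that a chat model can query) can be illustrated with a minimal cosine-similarity lookup. This is a hypothetical sketch with toy data, not GSI code:

```python
import numpy as np

# Toy "memory" for a chat model: each row is a stored document embedding (made-up data).
memory = np.array([
    [1.0, 0.0, 0.0],   # doc 0
    [0.0, 1.0, 0.0],   # doc 1
    [0.7, 0.7, 0.0],   # doc 2
], dtype=np.float32)

def nearest(query, store, k=2):
    """Return indices of the k rows most cosine-similar to `query`, best first."""
    q = query / np.linalg.norm(query)
    s = store / np.linalg.norm(store, axis=1, keepdims=True)
    sims = s @ q                      # cosine similarity against every stored row
    return np.argsort(-sims)[:k]      # top-k indices

print(nearest(np.array([0.9, 0.1, 0.0], dtype=np.float32), memory))
```

A chat application would embed the user's question, fetch the top-k stored passages this way, and paste them into the model's prompt, which is what "vector search as memory" means in practice.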

Speaker 1

Large language models use deep neural networks, such as transformers, trained on billions or trillions of words. This is another reason that fast vector search is an appropriate focused application for the APU. Additionally, we are improving our search and AI SaaS platform to support our go-to-market strategy for search. We intend to use this tool to develop more potential partnerships, like the OpenAI plugin integration that we recently launched, and with other open-source and decentralized search engines that use machine learning algorithms and vector search engines. The increasing size and complexity of enterprise datasets and the proliferation of AI in all aspects of business have been driving growth in these search engines. Encouraged by the positive reception of our APU plugin by several key players, we are optimistic about generating modest revenue from this market in fiscal year 2024. For both of the Gemini 1 focused applications just mentioned, SAR and fast vector search, we have set specific revenue goals that we aim to achieve this fiscal year. Our APU compiler stack has also made progress in the past quarter. Our compiler stack is designed to offer ease-of-development advantages while delivering high performance, without compromising either.

Speaker 1

All of our current focused applications do not require the compiler. We have a beta version in use currently and are on track to release the production-ready version later this year. The Python compiler will demystify the APU for data science developers. I'm excited to announce that we are on track to complete the tape-out for Gemini 2 by this summer and to evaluate the first silicon at the end of calendar year 2023. We aim to bring this solution to market in the second half of 2024. Gemini 2's design will provide significant performance enhancements with reduced power consumption.

Speaker 1

This will expand the future addressable market for the APU to larger markets such as edge applications, fast vector search, LLMs, and Advanced Driver Assistance Systems, or ADAS, the last one being a vertical we will go after with a strategic partner rather than directly. Gemini 2 is built with the TSMC 16 nanometer process. The chip contains 6 megabytes of associative memory connected to 48 megabytes of distributed SRAM with 45 terabytes per second of bandwidth, or 15 times the memory bandwidth of state-of-the-art parallel processors for AI. This is more than 4 times the processing power and 8 times the memory density compared to Gemini 1. The Gemini APU uses bit processing, which allows fully flexible data format applications, a huge advantage versus other parallel processors.

Speaker 1

Gemini 2 is a complete package that includes a DDR4 controller and external interfaces for PCIe Gen4 x16 and PCIe Gen4 x4. This integrated solution allows Gemini 2 to be used in affordable edge applications while still providing significant processing capability. In simpler terms, Gemini 2 combines different components together, allowing it to be used in less expensive devices while still being powerful enough to handle demanding tasks at the edge of a network. Put another way, Gemini 2 brings data center capability to the edge. This means that computationally intense applications can be done locally.

Speaker 1

For example, ADAS, delivery drones, autonomous rovers, UAVs (unmanned aerial vehicles) and satellites. Another application for Gemini 2 could be IoT edge applications and critical infrastructure processes requiring reliable and efficient operation, for example wind turbines, to mitigate failure modes that can lead to significant financial losses or operational disruptions. Gemini 2's combination of high processing power, large built-in memory with tremendous bandwidth, and low cost provides a best-in-class solution for AI applications like fast vector search, a growing market driven by the proliferation of large language models. Gemini 2's in-memory compute is the basic building block that allows Gemini 2 to deliver high processing power. We are thrilled to announce that we are currently in very early-stage discussions with a top cloud service provider to explore how Gemini 2's foundational architecture could deliver performance advantages.

Speaker 1

Just this year, we have seen the disruptive impact of large language models that understand and generate human-like language, like ChatGPT, Microsoft's Bing and Google's Bard. As the boundaries of language processing continue to be pushed, we envision abundant opportunities in this market for Gemini 2 and future versions of the APU. We believe that we have merely scratched the surface of the potential of large language models and the transformative impact they can have across industries. Large language models' attention mechanism requires very large memory and very large memory bandwidth on chip. The state-of-the-art GPU solutions that build in 3D memory to address the high-capacity memory requirement still have poor memory bandwidth for adequate memory access. This limitation is going to get worse as large language models progress. The Gemini chip architecture inherently has large memory bandwidth.
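
The memory pressure described here, attention state growing with context length, can be estimated with a standard back-of-the-envelope formula. The model parameters below are illustrative assumptions (a generic 7B-class transformer at fp16), not figures from the call:

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_val=2):
    """Bytes of attention key/value state one sequence keeps resident in memory.

    Factor of 2 covers both the key and value tensors per layer.
    """
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_val

# Hypothetical 7B-class model at fp16: 32 layers, 32 KV heads, head dim 128.
gb = kv_cache_bytes(32, 32, 128, seq_len=4096) / 2**30
print(f"{gb:.1f} GiB of KV cache per 4k-token sequence")
```

Every token generated must stream this entire cache through the compute units, which is why both capacity and bandwidth, not just FLOPS, bound LLM inference.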

Speaker 1

It is a natural migration to add 3D memory to the next-generation Gemini chip to address the large memory requirement. This substantial improvement potentially translates into orders-of-magnitude better performance. As a result, we could be strongly positioned to compete effectively in the rapidly expanding AI market, staying ahead of industry-leading competitors. Our resources and team are focused on applications where we have a high probability of generating revenue to capitalize on Gemini 1's capabilities. As we bring Gemini 2 to market, we will be more experienced in approaching target customers and creating new revenue streams. We are formulating our roadmap for the APU, which holds tremendous potential. With future versions, the APU has the capability to cater to much larger markets, and the potential opportunities are quite promising.

Speaker 1

In parallel with our Board of Directors, we are actively exploring various options to create shareholder value. I remain fully committed to driving sustained growth and innovation in the year ahead. Thank you for your support and for joining us today. We look forward to updating you on our progress in the coming quarters. Now I will hand the call over to Didier, who will discuss our business performance further. Please go ahead, Didier.

Speaker 2

Thank you, Lee-Lean. As Lee-Lean stated, we have sharpened our focus on a few near-term APU revenue opportunities. In addition, we have strengthened our team with a top data science contractor whose primary job is to accelerate the development of our plugin solution for the high-performance search engine platforms that Lee-Lean mentioned. We have also begun working with a company that offers custom embedded AI solutions for high-speed computing using Gemini 1 and Gemini 2. Another critical development to improve our market access for the APU has been adding distributors.

Speaker 2

We are pleased to announce that we have added a new distributor for the European market, covering not only our radiation-hardened and radiation-tolerant SRAMs but also our hardened APU. In addition to our partnerships and focus on near-term opportunities, we plan to build a platform to enable us to pursue licensing opportunities. This is in the very early stages, and we have work to do before we formally approach potential strategic partners. That said, we have had a few preliminary conversations on determining what is required to integrate Gemini technology into another platform. This would allow us to identify the specific performance benefits for our partners' applications to ensure effective communication of the problem we solve in their system or solution.

Speaker 2

We recently demoed Gemini 1 for a private company specializing in SAR satellite technology. They provide high-resolution earth observation imagery to government and commercial customers for disaster response, infrastructure monitoring and national security applications. The satellites are designed to provide flexible, on-demand imaging capabilities that customers can access worldwide. They recently provided the datasets to conduct comparison benchmarks on Gemini 1, and we are commencing the process of running those benchmarks. SAR is one market where we anticipate that we can generate modest revenue with Gemini 1 this fiscal year.

Speaker 2

GSI was recently awarded a Phase 1 Small Business Innovation Research contract, also known as an SBIR. SBIR is a United States government program that supports small business R and D projects that could be commercialized for specific government needs. For this contract, we will collaborate with the Air and Space Force to address the problem of edge computing in space with Gemini 1. Gemini 1 is already radiation tolerant, making it particularly well suited for Space Force missions. This contract is a milestone for GSI Technology, as it will showcase the APU's capabilities for the military and other government agencies and provide great references for similar applications.

Speaker 2

We have submitted other proposals for a direct-to-Phase 2 project, and other SBIR proposals are in the pipeline. On that note, we received verbal confirmation just this morning that we have been awarded a research and development contract, which could be worth up to $1,250,000, to integrate GSI's next-generation Gemini 2 for Air and Space Force mission applications. This revenue will be recognized as milestones are achieved, and a typical timeframe is 18 months to 2 years. Once the agreement has been finalized and executed, we will issue a press release with full details. Let me switch now to the customer and product breakdown for the Q4.

Speaker 2

In the Q4 of fiscal 2023, sales to Nokia were $1,200,000, or 21.8 percent of net revenues, compared to $2,000,000, or 23.1 percent of net revenues, in the same period a year ago and $1,300,000, or 20 percent of net revenues, in the prior quarter. Military/defense sales were 44.2 percent of 4th quarter shipments compared to 22.3% of shipments in the comparable period a year ago and 26.2 percent of shipments in the prior quarter. SigmaQuad sales were 46.3% of 4th quarter shipments compared to 47.6% in the Q4 of fiscal 2022 and 45.2% in the prior quarter. I'd now like to hand the call over to Doug. Go ahead, Doug.

Speaker 3

Thank you, Didier. I will start with the Q4 results summary, followed by a review of the full year fiscal 2023 results. GSI reported a net loss of $4,000,000, or $0.16 per diluted share, on net revenues of $5,400,000 for the Q4 of fiscal 2023, compared to a net loss of $3,000,000, or $0.12 per diluted share, on net revenues of $8,700,000 for the Q4 of fiscal 2022, and a net loss of $4,800,000, or $0.20 per diluted share, on net revenues of $6,400,000 for the Q3 of fiscal 2023. Gross margin was 55.9 percent in the Q4 of fiscal 2023 compared to 58.6% in the prior year period and 57.5% in the preceding Q3. The decrease in gross margin in the Q4 of 2023 was primarily due to the effect of lower revenue on the fixed costs in our cost of goods sold. Total operating expenses in the Q4 of fiscal 2023 were $6,900,000 compared to $8,100,000 in the Q4 of fiscal 2022 and $8,500,000 in the prior quarter.

Speaker 3

Research and development expenses were $5,000,000 compared to $6,500,000 in the prior year period and $5,500,000 in the prior quarter. Selling, general and administrative expenses were $1,900,000 in the quarter ended March 31, 2023, compared to $1,500,000 in the prior year quarter and $3,000,000 in the previous quarter. Fourth quarter fiscal 2023 operating loss was $3,900,000 compared to an operating loss of $2,900,000 in the prior year period and an operating loss of $4,800,000 in the prior quarter. Fourth quarter fiscal 2023 net loss included interest and other income of $101,000 and a tax provision of $191,000, compared to $47,000 in interest and other expense and a tax provision of $21,000 for the same period a year ago. The preceding Q3 net loss included interest and other income of $61,000 and a tax provision of $84,000. Total 4th quarter pre-tax stock-based compensation expense was $515,000 compared to $714,000 in the comparable quarter a year ago and $654,000 in the prior quarter.

Speaker 3

For the fiscal year ended March 31, 2023, the company reported a net loss of $16,000,000, or $0.65 per diluted share, on net revenues of $29,700,000, compared to a net loss of $16,400,000, or $0.67 per diluted share, on net revenues of $33,400,000 in the fiscal year ended March 31, 2022. Gross margin for fiscal 2023 was 59.6% compared to 55.5% in the prior year. The increase in gross margin was primarily due to product mix. Total operating expenses were $33,500,000 in fiscal 2023 compared to $34,900,000 in fiscal 2022. Research and development expenses were $23,600,000 compared to $24,700,000 in the prior fiscal year.

Speaker 3

Selling, general and administrative expenses were $9,900,000 compared to $10,200,000 in fiscal 2022. The decline in research and development expenses was primarily due to the cost reduction measures announced by the company in November 2022. The operating loss for fiscal 2023 was $15,800,000 compared to an operating loss of $16,400,000 in the prior year. The fiscal 2023 net loss included interest and other income of $202,000 and a tax provision of $372,000, compared to $60,000 of interest and other expense and a tax benefit of $45,000 a year ago. At March 31, 2023, the company had $30,600,000 in cash, cash equivalents and short-term investments with no long-term investments, compared to $44,000,000 in cash, cash equivalents and short-term investments and $3,300,000 in long-term investments at March 31, 2022.

Speaker 3

Working capital was $34,700,000 as of March 31, 2023 versus $45,800,000 at March 31, 2022 with no debt. Stockholders' equity as of March 31, 2023 was $51,400,000 compared to $64,500,000 as of the fiscal year ended March 31, 2022. Operator, at this point, we will open the call to Q and A.

Operator

Thank you. You will hear a three-tone prompt to acknowledge your request. And the first question comes from the line of Rajeev Gill with Needham. Please proceed with your question.

Speaker 4

Hi, this is Nick Doyle on for Raja Gill. Two questions on Gemini 2. Are all the costs related to the tape-out and then the test and volume production contemplated in your current outlook? And then could you expand on what kind of applications you're seeing traction in with Gemini 2, specifically anything in ADAS and then using the large language models? Thanks.

Speaker 3

Yes. In terms of R and D spending, yes, most of what we're spending today is on Gemini 2. We have the hardware team here in Sunnyvale and the software team in Israel. And there will be a tape-out in the first half of fiscal 2024 for Gemini 2; it will run probably about $2,500,000. Other than that, the R and D expenses should be similar to what we've seen in the most recent quarter.

Speaker 2

And regarding the applications, you cut out. Were you talking Gemini 1 or Gemini 2?

Speaker 4

Gemini 2.

Speaker 2

Yes. So for Gemini 2, ADAS, as we said in the conversation before, is something we want to address, but we most likely will use a partner to do that. And as far as the large language models, as we discussed, we certainly feel that the advantage in the Gemini technology will be applicable there. And so whether it starts with Gemini 2 or is addressed with a customized Gemini 3 is to be determined.

Speaker 4

Okay. That makes sense. And then just a quick one. Did you say if there is a timeline for the RadHard roadmap for the product you mentioned in the EU?

Speaker 2

The RadHard and rad-tolerant SRAMs are available today. We have done some testing; it's been at least a year and a half. We did the testing on the APU, Gemini 1 specifically, and it came back very favorable. But the beam was a little bit off that day, so we were limited in the tests we could do.

Speaker 2

So we are actually going to do the full complement of radiation testing in the second half of this year, so we have all the data requirements for the folks that will be sending it into space. So officially, the APU will be rad tolerant sometime by the end of this year.

Speaker 1

Makes sense. Thank you.

Operator

And the next question comes from the line of Jeff Bernstein with TD Cowen. Please proceed with your question.

Speaker 4

Hi, guys. Just a couple of questions for me. Just wanted to make sure I heard right. You brought on a consultant that's helping target applications for Gemini 1, is that right?

Speaker 2

They're specifically helping us write the interfaces for some of the fast vector search platforms that are out there.

Speaker 4

Got you. And then, you said there's a custom embedded AI solutions supplier, and that company is going to now integrate Gemini 1 into some high-performance compute solutions for clients. Am I getting that right?

Speaker 2

Partially. So it's not limited to Gemini 1; it's Gemini 1 and Gemini 2. And they have a multitude of different potential applications, ranging from SAR to satellite applications to marine search and rescue. There are a lot of different applications they're looking at it for. In some of the cases they'll be able to use essentially our Leda boards, but in many cases they will be developing their own ultra-small boards for applications where our boards are considered a little too big. So it's a multitude of different applications, and it will be for both Gemini 1 and Gemini 2.

Speaker 4

Got you. Okay. And then as far as the large language model kind of applications, I think there are two potentially, correct me if I'm wrong. One is just to run queries, as opposed to doing training: run queries of these large matrices quickly and at low power. And I guess the other one has to do with making training more efficient by being able to not redo matrices over and over again as you do new learning.

Speaker 4

Is that right? And which are we talking about here today? Can you shed some

Speaker 2

light on that?

Speaker 1

Yes. Our primary target will be the search, which is the inference part. Okay. We are not on the training part. Okay.

Speaker 1

But if you can do search efficiently, you can help the training, okay. We can do zero-shot training or single-shot training, which means you don't need to re-train on the entire dataset. If a first query comes in that we don't recognize, we can store it into our memory chip. And the second time a similar item comes in, we can recognize it right away. That's very different from traditional training, okay. With traditional training, you have to run the whole model on the whole dataset all over again. That's very, very time-consuming. So if you have the capability to do zero-shot training, then you can save on compute cost tremendously.
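
The "store it the first time, recognize it the second time" behavior described here is essentially a similarity cache over embeddings. A minimal sketch follows; the threshold and vectors are hypothetical illustrations, not GSI's implementation:

```python
import numpy as np

class SimilarityCache:
    """Single-shot learning: remember an embedding once, recognize near-matches later."""

    def __init__(self, threshold=0.9):
        self.vectors, self.labels = [], []
        self.threshold = threshold

    def recognize(self, vec):
        """Return the label of a stored near-match, or None if nothing is similar."""
        v = vec / np.linalg.norm(vec)
        for stored, label in zip(self.vectors, self.labels):
            if float(stored @ v) >= self.threshold:   # cosine similarity on unit vectors
                return label
        return None

    def store(self, vec, label):
        """Remember a normalized embedding; no model retraining involved."""
        self.vectors.append(vec / np.linalg.norm(vec))
        self.labels.append(label)

cache = SimilarityCache()
first = np.array([1.0, 0.1, 0.0])
print(cache.recognize(first))          # not seen before, so store it
cache.store(first, "item-A")
similar = np.array([0.98, 0.12, 0.01])
print(cache.recognize(similar))        # recognized without any retraining
```

This contrasts with the "traditional training" path in the answer, where adding one new item means re-running the whole training set.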

Speaker 4

That's great. Thank you.

Operator

And the next question comes from the line of Orin Hirschman with AIGH Investment Partners. Please proceed with your question.

Speaker 6

Hi, how are you? Good. So one of the things that the Gemini in-memory processing architecture is very good at, which really wasn't of tremendous interest when you first introduced Gemini, is this natural language processing. And all of a sudden, the whole world has changed and you've got things like ChatGPT and other similar types of NLP situations where it actually fits exactly into what you do best. I guess it sounded, from one of the prior comments and the last question, like you're actually having code and drivers written to be able to optimize the use of Gemini 1, and certainly Gemini 2, with these applications.

Speaker 6

Because I would think that one of the simple applications where you could sell a lot of boards is just simply acceleration, where everybody is having difficulty using GPUs, because this is not where a GPU shines on the AI side in terms of NLP, in order to accelerate something like ChatGPT?

Speaker 1

What's the question again?

Speaker 6

The question is: is that a priority in terms of what you're working on, to be able to introduce your own acceleration boards with partners, or is it in fact a great application? It sounds like, certainly so far in the call, that it's a great application for the Gemini APU.

Speaker 1

Okay. I think I discussed this in my statement. The biggest challenge for the large language models is on two fronts. First, you need a very large memory. Second, you need very high-bandwidth memory.

Speaker 1

Those are two very difficult things to achieve, okay? I think today, in the market, nobody has a good solution. Okay. So just as I mentioned, we do have a very exciting discussion with, we call it, a large cloud service provider, to see how we can help from our Gemini foundational architecture and how we can help move this thing forward. Okay.

Speaker 1

We already have very, very good memory bandwidth. Okay. As I mentioned in my statement, we have 15 times the memory bandwidth of today's state-of-the-art GPUs, okay, or parallel processors, okay, and that's our inherent architecture, okay. So if we can add to this one the high memory capacity, then yes, we would have a solution nobody in the market can provide, okay. So, now we are very excited.

Speaker 1

We are trying to build on this advantage we have and see where we can go from here.

Speaker 6

Any idea when you will have the coding for the interface, to be able to demo the type of acceleration gains we're talking about with something like ChatGPT? When will that code be ready? I know that's what you mentioned you're working on.

Speaker 1

Yes, yes, yes. OpenAI has a plugin. Okay. So basically, you can plug your software into the main machine and utilize the existing model through the plugin. So right now we are working on it.

Speaker 1

Okay. Gemini 1 first, with Gemini 2 to follow on. And at least we can extrapolate from how well those are working and extrapolate to the future.

Speaker 6

Any ideas when we might see some benchmarks in coming months?

Speaker 1

I would say in maybe a quarter or two, we'll have something to tell you guys.

Speaker 6

Okay. And just a related question, but even more futuristic. There's talk of doing something similar, and there were a number of projects, in fact you even had an early project with Muve, to be able to show off what you can do in terms of visuals as well. And I guess my question is: taking that same natural language processing and doing it on a visual level is beyond belief in terms of how computationally intensive it is, but also well suited for what you guys do.

Speaker 6

Is anybody talking about doing anything like that? Obviously, you did that early demo, which would have impressed a lot of people. But yes, obviously, that's even a step beyond almost what people have dreamed of today, and you can't do that using current architectures. So any thoughts on that from a futuristic perspective? And will that need a Gemini 3, or can that be done on the Gemini 2? And then one last follow-up question.

Speaker 1

So you're asking how we want to do it in the future generation?

Speaker 6

No. Specifically, the more important part to me is just in terms of incredible visual search capabilities, almost like NLP search capabilities on visuals. You did that impressive early demo with Muve, and there have been some other experimental projects; people all over the world are starting to do experimental projects on massive amounts of visual data. Any more thoughts on that? It's obviously very suitable, or uniquely suitable, for what you do, versus GPUs for that matter. Any other interesting projects like that Muve project? And I know it's a bit futuristic, but has anybody done more in terms of that massive type of visual search, comparative visual search, using NLP for visual search, using Gemini?

Speaker 6

Yes.

Speaker 1

As I just mentioned, with our partner, we looked at one workload, extending the architectural advantage of the Gemini architecture. If we have enough memory, we would be 10 times faster than any solution that exists today, okay. So that's why we are saying, hey, we have this inherent advantage, but the thing is we don't have enough built-in memory for that, okay? So if we can combine, for the future roadmap, if we can put enough memory into it, then you are looking at order-of-magnitude better performance than existing solutions.

Speaker 6

So on that note, a closing question: just in terms of what nanometer geometry is being used for Gemini 1 and Gemini 2, and what you're thinking for Gemini 3. And obviously, that will affect what you just discussed. Then just one follow-up and that's it for me. Thank you so much.

Speaker 1

Yes. Today, Gemini 1 is 28 nanometer and Gemini 2 is 16 nanometer. And if we look at the future, we would do 5 nanometer, and we would build the 3D memory in there, because the only way you can get high-capacity memory with reasonable bandwidth is 3D memory. So if we put in the 3D memory at 5 nanometer, it will be orders of magnitude better.

Speaker 6

So this is the follow-up question, with the understanding that that's Gemini 3, but knowing that Gemini 2 is going to be the platform coming up here shortly, the key platform. In terms of your ability to accelerate NLP, again, not visual, forget about that futuristic question, but here today, in terms of accelerating NLP applications and ChatGPT, etcetera: does Gemini 2 have enough in it so that you're competitive, or even superior, on that type of application versus a leading-edge, optimized GPU, like a Hopper-style GPU?

Speaker 6

Have you passed that

Speaker 1

It's big memory capacity or big memory bandwidth. Okay, we have one of them. So if a workload can fit into our chip, we will be the best solution out there. There are many, many cases like that, okay; even for ChatGPT, it doesn't have to be a humongous dataset, right? Okay.

Speaker 1

So it can be a smaller dataset, and if the dataset can fit into our chip, we will be number one in the market.

Speaker 6

Okay, great. Thank you so much.

Operator

The next question is from the line of Luke Boyne, Private Investor. Please proceed with your question.

Speaker 5

Hi. Good to be back. I hope you all are well. These are very exciting announcements and developments. Great to hear the comprehensive layout there.

Speaker 5

Just a few really kind of minor clarifications, and going a little bit broader with the near-term potential. I'm wondering if your Amazon Web Services server offering is capable of fielding, say, a broader range of companies and potential end-use cases that could more or less play around with your service without having to go through more complex processes of embedding or other integration, just plug and play and see what you can do for their applications. I'm especially thinking about vector search, but also rich data, like was mentioned, maybe for the metaverse, maybe for dense registration, things like that. I'm wondering how you're seeing the potential to expand the Amazon Web Services or a similar offering on, say, Azure or other clouds, and especially how that would relate to an earlier rollout of Gemini 2 from your own facility, your own servers, on those clouds?

Speaker 2

So we've started, as we've discussed in the past, the integration with OpenSearch, and that's ongoing. We have already set up our own servers for that. We have some here in our Sunnyvale facility, some in our Israeli facility, and then we also have some at an off-site facility that's directly across the street from AWS West and is directly connected. So we have that in place with the Gemini 1.
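
For context on the OpenSearch integration mentioned here: OpenSearch exposes approximate nearest-neighbor search through its k-NN query DSL, so a plugin-backed search is driven by a plain JSON request body. The index and field names below are hypothetical, and this only builds the body rather than calling a live cluster:

```python
import json

def knn_query(field, vector, k=10):
    """Build an OpenSearch k-NN search body for an approximate nearest-neighbor lookup."""
    return {"size": k, "query": {"knn": {field: {"vector": vector, "k": k}}}}

# Hypothetical embedding field and query vector.
body = knn_query("doc_embedding", [0.12, 0.53, 0.91], k=3)
print(json.dumps(body))
# A real deployment would POST this body to <host>/<index>/_search
# against an index whose field is mapped as type "knn_vector".
```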

Speaker 2

Over time, obviously, we would migrate those to Gemini 2. So those are in place. We do have some SAR demos that people can run off of those remotely. It's not set up yet to be able to load your own data; the datasets are already in there, and you can run those. So we're not at the point yet where you can enter your own data, at least not larger datasets.

Speaker 2

But that is certainly the direction we're going. We're just not quite there yet.

Speaker 5

Do you have a timeline on when you would be able to roll out those interactive features and capacities?

Speaker 2

We're shooting for this year. For some of the examples you brought up, we're going to get some help from the data science contractor we have on board now. So it's something we're trying to roll out in the second half of this year.

Speaker 5

Excellent. All right. That's all I have. Pipeline's loaded. Yes, appreciate you all.

Speaker 2

Thanks, Luke.

Operator

And the next question is a follow-up from the line of Jeff Bernstein with TD Cowen.

Speaker 4

Yes. Just wanted to see if you can give us an update on the Elta SAR application and what's going on there?

Speaker 2

Yes. So as you recall, we did the POC with them, and it was a very broad POC. It could be used for different vehicles or vessels, and at a multitude of heights, from 100 meters to much, much higher, obviously into space. The initial program they were looking at us for was just a single laptop, I guess you could call it that. They had already been using a GPU, and so they're still using the GPU for that program.

Speaker 2

There's a follow-on program that they're looking at us for now, and we're going through that process with them. It won't be another POC because we've already done one, but it will be a bit of a different project than what we were working on with them. It will still be under SAR, and it will still be the same algorithm, so it should be a simple integration.

Speaker 4

Okay. And then, just wondering: we've been waiting to hopefully get some space provenance on the RadHard SRAM, and wondering if you guys have any visibility now on when that launch might happen, or is it permanently scrubbed?

Speaker 2

No, it's not permanently scrubbed. I get your frustration because I'm with you on this one. So it's not scrubbed. There were multiple programs, and when I say they, there were a few defense contractors involved, as we've talked about. A couple of those programs have been scrubbed, but the large ones we're looking at have not been scrubbed. They're certainly still out there.

Speaker 2

It's just that they've been pushing out the launch dates, and we're not getting a good feel for exactly when the next launch is going to be. Originally, we know they were delayed because they couldn't get some critical components, and now it's just a matter of getting them to actually do it. So the answer is we're still optimistic about it; it's just that the timing is elusive for us on when it's actually going to happen.

Speaker 4

And so can the European distributor do anything on the RadHard piece? Or are they stuck with just doing rad tolerant until you get that space provenance, or is it a different approach in Europe?

Speaker 2

No, they're definitely going to be going after everything. So for the folks that we've already shipped parts to, where we're looking to get heritage, it's really just a heritage part. And heritage is basically a signal to the world that says your parts have been launched into space and they work. So it's really an additional check mark in a box for a lot of these folks, but it doesn't change the fact that our parts are already internally qualified to work up there. We know they will work based off of the testing that we have done.

Speaker 2

So this European distributor is going to be finding additional opportunities for us. The folks that we were looking to do the heritage for the short-term launches were U.S.-based companies. We have shipped some rad tolerant parts and at least one RadHard part to a European customer, but they were not the ones we anticipated to get us the initial heritage.

Speaker 5

Okay.

Speaker 4

All right. Great. And then any update on some of the scientific applications? Did the Weizmann Institute come back for more boards, or any analogous customers in pharma, medtech, biotech, universities, etcetera?

Speaker 2

Universities, yes. Candidly, we're not spending a lot of time on that market; the revenue opportunities for the other markets we've discussed today are larger. We do have 2 universities that, well, let me think. Yes, there are 2 different applications for 2 different universities.

Speaker 2

They're looking at them for genomics, so they'll essentially be doing the algorithms and doing the write-up. But we are not spending much effort ourselves. We've already done a plug-in specifically for the BIOVIA Tanimoto, and it just doesn't make sense for us, based off of our limited resources, to spend more time developing more algorithms for more platforms. The revenue volumes there just aren't as great as they are in the other markets we're addressing.

Speaker 4

Makes sense. Makes sense. Thanks.

Operator

There are no further questions at this time. I will now turn the call back to management for closing remarks.

Speaker 1

Thank you all for joining us. We look forward to speaking with you again when we report our next quarter's results.

Operator

That does conclude today's conference. We thank you for your participation and ask that you please disconnect your lines.
