
“We Don’t Just Analyse Images—Our AI Finds Patterns In Everything Satellites Sense” - Vishesh Vatsal, SkyServe

Satellites are now smart enough to think in space. How does this help us on Earth? What problems does it solve? In a talk with Nidhi Agarwal from EFY, Vishesh Vatsal from SkyServe shares how AI works in orbit and why it matters…


Q. What made you start this company and come up with this idea?

A. From 2013 to 2019, my co-founders and I worked at a company called Team Indus. I was developing guidance, navigation, and control systems for lunar landings—similar to what you saw in Chandrayaan 3, although not for that specific mission. After we left Team Indus, the three of us wanted to apply our space tech skills in a way that could be used commercially, especially for Earth observation. Deep space missions, such as those to the Moon or Mars, typically require longer development times to become commercially viable. So we looked for more immediate use cases. Around that time, AI and deep learning were just starting to grow in the space sector, and we saw a strong opportunity there.

Q. What problem is SkyServe trying to solve with its satellite technology?

A. Our company is building smarter satellites. Right now, most satellites collect a lot of data in space and send all of it down to Earth for use in areas like agriculture, shipping, or disaster response. But the amount of data being collected is growing much faster than our ability to bring it down. To solve this, we’re creating technology that lets satellites process data in space. This way, only the important data is sent back. It reduces the amount of data we need to handle on Earth and, over time, could make Earth observation much more affordable.

Q. What is the difference between the traditional and the smart satellite?

A. Traditional satellites were designed for older systems where only small amounts of data were collected and shared. For example, a satellite might take pictures of part of the Earth and send them down after several hours or even days. This delay, along with limited coverage, makes the data less useful in many real-time situations. Also, during cloudy weather, like in the monsoon season, a large part of the images just shows clouds. This data is often not helpful but still gets sent, wasting the limited connection between satellites and the ground.

The SkyServe Edge AI Suite fixes this. Satellites running it can identify when clouds are blocking the view and avoid sending that data, focusing only on clear areas. They can also process the images in space itself. For example, they can spot wildfires while flying over Australia or detect floods over Chennai, all during the same pass. This processed and useful information can then be shared with users within minutes, making the data quicker, more useful, and far less wasteful.
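As a rough illustration of this kind of onboard filtering, the sketch below drops image tiles whose cloud cover exceeds a threshold before downlink. The threshold and the binary cloud masks are placeholders for this example, not SkyServe’s actual pipeline.

```python
import numpy as np

CLOUD_THRESHOLD = 0.6  # assumed cut-off: skip downlink if >60% of a tile is cloud

def cloud_fraction(cloud_mask: np.ndarray) -> float:
    """Fraction of pixels flagged as cloud in a binary mask (1 = cloud)."""
    return float(cloud_mask.mean())

def select_tiles_for_downlink(tiles, masks):
    """Keep only the tiles whose cloud cover is below the threshold."""
    return [t for t, m in zip(tiles, masks) if cloud_fraction(m) < CLOUD_THRESHOLD]

# Two 256x256 tiles: one fully clear, one fully cloudy
clear = np.zeros((256, 256), dtype=np.uint8)
cloudy = np.ones((256, 256), dtype=np.uint8)
print(select_tiles_for_downlink(["tile_a", "tile_b"], [clear, cloudy]))  # ['tile_a']
```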

Q. What are you developing to make the satellite smarter?

A. We build edge computing software infrastructure. This is a system that lets you define what a satellite should do. Satellites have payload computers. Our software runs on these computers and allows applications to be set up from Earth and run on the satellite. This setup lets the satellite do computing tasks in space. You can build models on Earth, send them to the satellite, run them there, and get the results back. The same system can be used on many satellites working together. It is modular and can scale to support a large number of satellites in a network.
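To make “defining what a satellite should do” concrete, here is a hypothetical application descriptor of the kind a ground operator might uplink. Every field name is invented for illustration; SkyServe has not published its actual interface.

```python
from dataclasses import dataclass

@dataclass
class EdgeApp:
    """Hypothetical descriptor for an application uplinked to a payload computer."""
    name: str
    model_path: str       # model artefact built and validated on the ground
    input_sensor: str     # which onboard sensor feeds the application
    max_memory_mb: int    # resource cap enforced by the edge runtime
    output_topic: str     # queue where inference results wait for downlink

app = EdgeApp(
    name="flood-detector",
    model_path="models/flood_v3.onnx",
    input_sensor="multispectral-imager",
    max_memory_mb=512,
    output_topic="insights/flood",
)
print(app.name, "->", app.output_topic)
```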

Q. Can you briefly say what AI and deep learning do in your innovation?

A. Most people today think of AI as chatbots and generative AI. But that is not what we use. We focus on a kind of AI that works with sensor data from space. For example, we use AI to analyse satellite images—like during Operation Sindoor, where we showed damage on a runway in another country. This uses computer vision and AI models for remote sensing.

But it is not just images. Satellites also produce other types of data, like time-series data and health telemetry. We use AI to find patterns or changes in that data. The models we use include computer vision, deep learning, time-series analysis, anomaly detection, and sometimes LLM infrastructure, though that is not always needed.
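For the telemetry side, here is a minimal sketch of anomaly detection on a single health channel using a rolling z-score. The voltage values, window, and threshold are synthetic assumptions; a real detector would be tuned per channel.

```python
import numpy as np

def zscore_anomalies(series: np.ndarray, window: int = 50, threshold: float = 4.0):
    """Flag samples that deviate strongly from their recent history."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Synthetic bus-voltage telemetry with one injected fault at sample 300
rng = np.random.default_rng(0)
volts = 28.0 + 0.05 * rng.standard_normal(500)
volts[300] = 26.5  # sudden sag
print(zscore_anomalies(volts))  # e.g. [300]
```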

Q. Where does electronics come in?

A. Electronics are used in the computing systems we put on satellites. These are called payload computing systems. They include hardware, software, and power connections that link to the satellite’s main computer. We also built special adapters to enable our system to connect and communicate with the satellite. But this part isn’t our main focus. The real innovation is in our software. Instead of just sending images or raw data, our system can process data in space and send useful insights directly.

Q. Is the platform on Earth or in the satellite?

A. Our users primarily interact with the ground-based part of our platform, which we call Surge. We have partnered with satellite operators to support an orbital version of the platform, known as Storm. Users can submit their models through Surge, and the platform handles distribution to satellites or missions in orbit. This setup enables users to specify precisely which models should run and on which satellites within our network.

Q. What design problems did you face?

A. One big challenge with using AI/ML models on edge computers in satellites is the limited power and computing available. Unlike cars or desktop computers, satellites can’t afford high power or memory use. So, we have to be very careful about how much memory the models need and how much power they use. That is why we use model compression and other optimisation techniques to make sure the models can work well within these tight limits.
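As one concrete example of such optimisation, post-training dynamic quantization in PyTorch stores weights as 8-bit integers instead of 32-bit floats, roughly quartering the memory of the quantized layers. This is a common compression technique, offered here as a sketch rather than a description of SkyServe’s method.

```python
import io
import torch
import torch.nn as nn

# A small stand-in network; real onboard models would be CNNs for imagery
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 4))

# Store weights as int8 instead of float32 for the Linear layers
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def serialized_size(m: nn.Module) -> int:
    """Size of the saved state dict in bytes."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.tell()

print(serialized_size(model), "->", serialized_size(quantized), "bytes")
```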

Q. Are there any hardware limits when making an AI system?

A. This is especially relevant because, as I mentioned, the key technology challenges we face are related to power, computation, and memory limitations. AI models running on these systems must be carefully designed with these constraints in mind. It is also important to note that these constraints vary significantly across different satellite operators, since each may have very different onboard systems. That is why we put a lot of effort into optimising our models to make the most of the available resources in each specific context.

Q. How is power managed during AI work in space?

A. Power consumption management on satellites is handled through careful coordination with the mission operations team. They plan by identifying when high-power activities, such as AI inference tasks, will take place, and assessing what other onboard systems will be active at the same time. Based on this, they define power usage profiles and operation schedules to ensure that everything stays within the satellite’s available power budget. This helps determine the performance limits within which we must operate. That said, with the rapid advancement in low-power computing hardware, this challenge may soon fade. As electronics become more power-efficient, we expect power constraints to become less of an issue in future satellite missions.
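A toy version of that budgeting check, with invented wattages, might look like this:

```python
# All wattages are invented for illustration; real budgets come from the
# mission operations team's power profiles.
BUDGET_W = 60.0  # orbit-average power available during the pass

baseline_loads_w = {"adcs": 12.0, "comms": 18.0, "thermal": 8.0}
inference_load_w = 25.0  # measured draw of the payload computer under AI load

def can_schedule(extra_w: float) -> bool:
    """Check a planned activity against the remaining power budget."""
    return sum(baseline_loads_w.values()) + extra_w <= BUDGET_W

print(can_schedule(inference_load_w))  # False: 38 + 25 = 63 W > 60 W, so defer
```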

Q. How accurate are your models, and how do you handle limits in onboard computing?

A. We usually measure accuracy by comparing the model’s output with well-annotated ground truth data for specific use cases. When it comes to the models being deployed, especially those shared by our customers through our platform, the models themselves are developed and owned by the customers. We work closely with them during the process, but it is ultimately their responsibility to ensure the models meet the accuracy levels they are satisfied with for their satellite missions.

Q. What are the main problems in testing AI models?

A. One of the main challenges in testing and validating AI models—especially in our case—is the mismatch between training data and real-world data. For example, if we receive an AI model trained on certain types of imagery, it may not perform well when applied to in-orbit satellite images. This is because the model hasn’t seen such data during training. As a result, its performance drops initially. To address this, we go through a few iterations, adjusting the model to better handle the actual in-orbit imagery. This process improves accuracy but takes some time during validation.
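A schematic version of one such adaptation iteration, assuming a PyTorch model and a small set of annotated in-orbit samples (all values here are synthetic stand-ins):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def finetune_on_orbit_samples(model, loader, epochs=3, lr=1e-4):
    """Adapt a ground-trained model using annotated in-orbit imagery."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()
    return model

# Tiny synthetic stand-in: a 4-class classifier and random "in-orbit" samples
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 4))
data = TensorDataset(torch.randn(64, 3, 32, 32), torch.randint(0, 4, (64,)))
finetune_on_orbit_samples(model, DataLoader(data, batch_size=16), epochs=1)
```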

Q. What simulation do you use to test your system for space conditions like latency, temperature, and radiation?

A. Yes, we carry out thorough testing on the payload computers we use. This includes thermal tests, temperature monitoring, and health telemetry. The hardware is well-designed and comes with solid qualification data. We also perform standard compliance tests like vibration and acoustic checks to ensure reliability.

For radiation and similar tests, we usually refer to reports from other batches of the same hardware that have been tested elsewhere. Direct testing for radiation is often out of budget for these kinds of missions. Since these computers are only payload units and not part of the satellite’s core systems, we can accept a small level of risk without affecting satellite operations.

Q. What backup or safety steps are in your edge AI system to handle errors or failures?

A. For fault tolerance, we have built our software to recover from various types of failures—like software crashes or power resets. These are handled through software-level recovery mechanisms. Hardware-related issues, such as watchdog events, are managed through the hardware design itself. Additionally, we employ techniques such as ECC (error-correcting codes) to enhance software reliability.
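A minimal sketch of the software-level recovery idea: a supervisor that restarts a crashed task with a growing back-off delay. The restart budget and the simulated fault are assumptions for illustration.

```python
import time

def run_with_recovery(task, max_restarts=3, backoff_s=1.0):
    """Restart a payload task after a crash, with a growing back-off delay."""
    for attempt in range(max_restarts + 1):
        try:
            return task()
        except Exception as exc:
            if attempt == max_restarts:
                raise RuntimeError("restart budget exhausted; flag for ground") from exc
            time.sleep(backoff_s * (attempt + 1))

calls = {"n": 0}
def flaky_inference():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("simulated transient fault")
    return "inference complete"

print(run_with_recovery(flaky_inference, backoff_s=0.1))  # inference complete
```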

Q. How do you decide which data to process on the satellite and which to send to the ground?

A. The priority of processing depends heavily on the use case. For example, in agriculture, a farmer might not need satellite data immediately — they can wait days or even weeks. But in disaster response, timing is critical, so those models are given much higher priority. We handle this by designing our system to be flexible from the ground up. Once the models are deployed in orbit, we can control which ones run when, based on the urgency of the insights they produce. This enables us to manage latency based on the criticality of the data.
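A minimal sketch of urgency-based scheduling using a priority queue; the use cases and priority numbers are assumptions, not SkyServe’s actual policy.

```python
import heapq

# Lower number = higher urgency; the mapping is illustrative.
PRIORITY = {"disaster-response": 0, "maritime": 1, "agriculture": 2}

class InferenceScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps equal-priority tasks in FIFO order

    def submit(self, use_case: str, task: str):
        heapq.heappush(self._queue, (PRIORITY[use_case], self._seq, task))
        self._seq += 1

    def next_task(self) -> str:
        return heapq.heappop(self._queue)[2]

sched = InferenceScheduler()
sched.submit("agriculture", "ndvi-update")
sched.submit("disaster-response", "flood-map-chennai")
print(sched.next_task())  # flood-map-chennai runs first despite arriving later
```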

Q. What compression steps do you use for different types of AI model outputs?

A. We usually divide AI model outputs into different classes of inference, such as segmentation or object detection. Segmentation outputs are typically in the form of masks, while object detection results are more structured data. For each type, we apply a series of compression steps to reduce data size before transmission or storage.

For example, with segmentation masks, we first byte-encode the data, then apply bit-packing, followed by standard compression techniques like BZIP or ZIP. Encryption is also added as part of this process. Typically, it is a three- to four-step pipeline: encoding, compression, and encryption. We follow similar multi-step compression methods for other types of model outputs as well.
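Here is a minimal sketch of that segmentation-mask pipeline: bit-pack, compress with BZIP2, then encrypt. The cipher (Fernet from the third-party cryptography package) is an illustrative stand-in, since the interview does not name the encryption scheme; the mask’s shape would also need to be sent so the ground segment can unpack it.

```python
import bz2
import numpy as np
from cryptography.fernet import Fernet  # illustrative cipher choice

def pack_mask(mask: np.ndarray, key: bytes) -> bytes:
    """Encode -> bit-pack -> compress -> encrypt a binary segmentation mask."""
    bits = np.packbits(mask.astype(np.uint8))  # 8 pixels per byte
    compressed = bz2.compress(bits.tobytes())  # BZIP2, as mentioned above
    return Fernet(key).encrypt(compressed)     # encrypt before downlink

key = Fernet.generate_key()
mask = np.zeros((512, 512), dtype=np.uint8)
mask[100:200, 100:200] = 1  # one detected region
payload = pack_mask(mask, key)
print(mask.size, "pixels ->", len(payload), "bytes")
```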

Q. How do you update the AI model after launch? Is there a way to retrain or tune it remotely?

A. We use an over-the-air update system. Instead of sending the entire model again, we just send the difference between the current model on the satellite and the new one. For example, if certain weights in the final layer of an AI model have changed, we only send those specific weights. This diff is uplinked efficiently, and the model onboard is updated accordingly. This makes the whole process very efficient.
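A minimal sketch of such a diff-based update, treating model weights as a dictionary of arrays. The layer names and values are invented for the example.

```python
import numpy as np

def make_diff(old: dict, new: dict) -> dict:
    """Keep only the tensors that changed between model versions."""
    return {
        name: tensor
        for name, tensor in new.items()
        if name not in old or not np.array_equal(old[name], tensor)
    }

def apply_diff(onboard: dict, diff: dict) -> dict:
    """Patch the onboard weights with the uplinked diff."""
    patched = dict(onboard)
    patched.update(diff)
    return patched

old = {"conv1.w": np.zeros((3, 3)), "head.w": np.ones(4)}
new = {"conv1.w": np.zeros((3, 3)), "head.w": np.full(4, 1.1)}  # only the head changed
diff = make_diff(old, new)
print(list(diff))  # ['head.w']: only this layer needs to be uplinked
onboard = apply_diff(old, diff)
```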

Q. What standards or APIs do you use for third-party model integration?

A. We have built a tool called Surge that third parties can log into. They submit their model along with details such as the type of images it works on, the desired output format, and the mission they aim to support. We provide tools and resources to help them upload their model—usually through private repositories—and from there, our system takes over. We automatically access the repository, test the model on flight-representative hardware, verify output parity, and share a report confirming it’s ready for flight.

We also make sure our onboard application is containerised and fully compatible with the hardware and software it will run on during the mission. We handle all of this ourselves, so customers don’t have to worry about integration or compatibility issues.
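A minimal sketch of an output-parity check of this kind, assuming numeric tolerances chosen for the example:

```python
import numpy as np

def outputs_match(ground_out: np.ndarray, flight_out: np.ndarray,
                  rtol: float = 1e-3, atol: float = 1e-5) -> bool:
    """Check that flight hardware reproduces the ground results, allowing
    small numeric drift between different accelerators."""
    return ground_out.shape == flight_out.shape and bool(
        np.allclose(ground_out, flight_out, rtol=rtol, atol=atol)
    )

ground = np.array([0.12, 0.88])            # scores from the customer's dev machine
flight = np.array([0.1200004, 0.8799996])  # scores from the payload computer
print(outputs_match(ground, flight))       # True: within tolerance
```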

Q. What makes it hard to run a geo-AI model in space vs on Earth?

A. Yes, there are quite a few challenges. One major issue is that most models are trained on clean, high-quality datasets from Earth, but imagery captured in space can be quite different. Another big challenge is creating analysis-ready data. That involves tasks such as geo-referencing and adding contextual information with high accuracy, which is very challenging to do onboard. These are complex problems, but we’re working on them and expect they’ll be gradually solved over time.

Q. Who are your target customers? 

A. Today, Earth observation has two main types of users—half are government users like disaster response teams, and the other half are commercial users. Governments primarily use it to gain faster insights for situations such as emergencies. Commercial users include those in shipping, insurance, or disaster management. They are also interested in using satellites that can process data in space (edge computing) to get quicker and more helpful information.

Q. Do you see any competitors working in this field?

A. I believe the technologies we have developed and demonstrated are among the first of their kind in India. Within the Indian ecosystem, several satellite operators are working on computational hardware and related systems. However, none have built or deployed these technologies at the scale we are working on.

Q. What steps are needed to go from one AI satellite to many working together in space?

A. Yes, we believe this is essential, and it is something we are actively working on. We do need some level of technology development—not necessarily fixed milestones—but once these technologies are successfully demonstrated in space, we should be ready to adopt them directly in satellite constellations. I don’t think there is a need to wait. Over the next couple of years, we expect to see these technologies tested on various platforms like satellites, sensors, and onboard computers. Once proven, they will make a strong case for large-scale adoption by satellite operators.

Q. How do you see edge AI growing in future Earth missions with many sensors and data types?

A. With the increasing number of sensors on satellites, we’re seeing a rapid rise in both the quality and quantity of data being generated in space. Quality refers to higher spectral and spatial resolution, as well as more sensitive sensors, whereas quantity refers to the sheer volume of data produced. To handle this, edge AI will need to evolve—adapting to process more complex data directly on the edge. This means we’ll start running advanced models in space that were previously only feasible in large industrial setups on Earth. In short, space-based AI will follow a similar path as terrestrial AI, becoming more capable and sophisticated.

Q. How is the government helping you? Do you get any support?

A. We are getting support through government grants. For example, we have an In-SPACe grant for space projects. We are also in talks with the Department of Science and Technology for projects using edge computing in disaster situations. The new space and remote sensing policies have built investor confidence, which helps promote new technologies from India.

Q. Is India ready to lead in smart satellite tech?

A. India has a strong and skilled workforce, which gives us a solid foundation. Over the years, the ISRO ecosystem has helped build deep expertise across the satellite sector. Most of the players involved in assembling and making satellites in India now have access to this growing knowledge base. With this support, the next generation of satellites will likely be smart satellites, and many of them will be built here.

Q. What are your growth plans in the next few years? Are you investing in marketing or tech?

A. Our current focus is to complete the technology demonstration phase and begin adoption by a few satellite constellations. The goal is to build a strong case for investment by strategic and government operators, both in India and globally. We aim to demonstrate that this is a space where large players, particularly governments, should be involved.

The outcome of these efforts will be a new generation of smart satellite constellations that can reshape Earth observation over the next five years. This will result in more frequent data collection, significantly lower costs, and faster access to high-quality, relevant data. By downlinking only the useful information, we can reduce the waste that currently exists in the Earth observation data pipeline.

Q. What was your revenue last year? Are you making a profit?

A. Right now, we are generating some revenue through early commercial work, but we are not yet profitable. Space is a long-term game—typically a 10- to 15-year journey. In the early years, we used investment to build and test our technology. As more satellite operators begin to utilise our infrastructure, we anticipate becoming profitable soon.

Q. What’s stopping your startup from growing fast right now?

A. I think in our small ecosystem, more collaboration between companies is needed. That usually happens when there is strong government support and funding. We are actively working on this and trying to participate in larger government programs. One example is the SBS program—we’d really like to be included in that. We are also working to raise awareness about the benefits of this technology. Hopefully, in the next few years, it will be used in Indian satellites, both for defence and commercial purposes.

Q. Are you hiring? What do you want in a candidate?

A. Yes, we are hiring, and you will start seeing that soon as new projects begin. As these projects roll out, we anticipate that more positions will become available. This is a time when AI and low-code tools are everywhere, but we still value strong basics—like understanding programming, geospatial data, and remote sensing. Today, learning is easier than ever, thanks to free resources and affordable computing. We look for candidates who have explored these areas and, ideally, built some personal projects. It helps them grow and also speeds up our hiring process.



Nidhi Agarwal
Nidhi Agarwal is a Senior Technology Journalist at EFY with a deep interest in embedded systems, development boards and IoT cloud solutions.
