When the Cable Snaps: Why Regional Compute Can’t Be an Afterthought

The internet is a web of glass threads lying on the seabed. Twice, in starkly different seas, those threads were cut.


Two **subsea cables** in the Baltic Sea were severed within hours of one another in November 2024, cutting capacity across Finland, Lithuania, Sweden, and Germany.

In **September 2025**, multiple systems in the Red Sea, one of the world’s busiest internet corridors, were damaged, degrading services across Europe, the Middle East, and Asia.

Each event had its own cause, but the net effect for users, enterprises, and cloud providers was the same: latency spikes, rerouting stress, and an unpleasant lesson that our digital lives rely on a handful of physical chokepoints.

## The myth of infinite bandwidth

It is easy to assume “the cloud” will simply absorb disruptions. Hyperscalers such as Microsoft and AWS do have strong redundancy, and traffic was rerouted. But physics can’t be abstracted away:

**Latency increases** when traffic takes a detour of thousands of kilometers.

**Throughput decreases** when alternative routes absorb the displaced workloads.

**Resilience shrinks** when further cables in the same geography fail.

For latency-sensitive services — trading platforms, multiplayer gaming, video collaboration — the difference between 20 ms and 150 ms is the difference between usable and unusable. And for compliance-heavy workloads, being rerouted through unfamiliar jurisdictions carries risks of its own.

## Regional compute is the antidote

The lesson is that enterprises that don’t want to be exposed to chokepoints need compute capacity closer to both users and data sources. Regional doesn’t just mean “on the same continent”: it means operations can continue even if a submarine cable is cut and important international routes go offline. Regional compute delivers in three ways:

1. Continuity of performance – mission-critical applications stay fast and stable when cross-ocean paths fail.

2. Risk diversification – no dependence on a single corridor, whether the Red Sea, the Baltic Sea, or the English Channel.

3. Regulatory alignment – in some jurisdictions, including the EU, keeping data within borders also satisfies sovereignty requirements.

## Europe as a case study: sovereignty through resilience

Europe’s movement for “digital sovereignty” (see NIS2, the EU Data Boundary, AWS’ European Sovereign Cloud…) is frequently presented in terms of compliance and control. But the cable incidents illustrate a deeper principle: keeping capacity local is a resilience measure first and a regulatory checkbox second.

If you’re working inside the EU, sovereignty is one factor. If you’re in Asia, the reasoning is similar: reduce reliance on Red Sea transit. In North America, resilience might mean investing in a variety of east–west terrestrial routes to protect against coastal chokepoints.

## A global problem with regional solutions

Route disruptions caused by natural catastrophes, ship anchors, and even deliberate sabotage have struck the Atlantic, Pacific, and Indian oceans. Every geography has its weak spots. That’s why international organizations are increasingly asking: where can we compute if the corridor collapses?

The answer frequently isn’t another distant hyperscale region. It’s:

**Regional data centers** embedded in terrestrial backbones.

**Local edge nodes** for caching and API traffic.

**Cross-border clusters** with real route diversity, not just carrier diversity.

## Building for the next cut

Here’s what CIOs, CTOs, and infrastructure leaders can do:

1. Map your exposure. Do you know which subsea corridors carry most of your workloads’ traffic? Most organizations don’t. Ask your providers for path transparency.

2. Design for “cable cut mode.” Envision what happens if the Baltic or Red Sea corridor goes dark. Test failover, measure latency, and revise the architecture accordingly.

3. Invest regionally, fail over regionally. Don’t just replicate data across oceans; build failover capacity in your core markets where possible.

4. Contract for resilience. Diversity in routes, repair-time commitments, regional availability — build these into your SLAs.

5. Frame it as business continuity. This is not just a network operations issue; it’s a boardroom problem. One day of degraded service can exceed the cost of additional regional capacity.
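Step 2 above says to test failover and measure latency. That doesn’t require heavy tooling to start: below is a minimal Python sketch that times a TCP handshake to each endpoint and compares it against an application latency budget (recall the 20 ms vs 150 ms gap). The hostnames are placeholders; real monitoring would probe continuously from every region you serve.

```python
import socket
import time

def tcp_connect_ms(host: str, port: int, timeout: float = 3.0):
    """Time a TCP handshake; return milliseconds, or None if unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None

def classify(latency_ms, budget_ms: float = 50.0) -> str:
    """Compare a measurement against the application's latency budget."""
    if latency_ms is None:
        return "unreachable"
    return "ok" if latency_ms <= budget_ms else "degraded"

if __name__ == "__main__":
    # Hypothetical regional endpoints -- substitute your own service hosts.
    for host in ("eu-west.example.invalid", "ap-south.example.invalid"):
        print(host, classify(tcp_connect_ms(host, 443)))
```

Run the probe before and during failover drills, and alert when endpoints flip from “ok” to “degraded”: that flip is your cable-cut mode becoming visible.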

## Beyond sovereignty

Yes, sovereignty rules in Europe are a push factor. But sovereignty alone doesn’t explain why a fintech in Singapore, a SaaS in Toronto, or a hospital network in Nairobi would care about regional compute. They should care because cables are fragile, chokepoints are real, and physics doesn’t negotiate.

## The bottom line

The recent cable cuts weren’t necessarily catastrophic. They were warnings. And the world’s dependence on a few narrow subsea corridors is increasing, not decreasing. As AI, streaming, and cloud adoption accelerate, the stakes rise.

Regional compute isn’t only about sovereignty. It’s about resilience. The organizations that internalize that lesson now—before the next snap—will be the ones that stay fast, compliant, and reliable while others grind to a halt.

Subscribe & Share now if you are building, operating, or investing in the digital infrastructure of tomorrow.

#SubseaCables #CableCuts #DigitalResilience #RegionalCompute #DataCenters #EdgeComputing #NetworkResilience #CloudInfrastructure #DigitalSovereignty #Latency #BusinessContinuity #NIS2 #CloudComputing #InfrastructureSecurity #DataSovereignty #Connectivity #CriticalInfrastructure #CloudStrategy #TechLeadership #DigitalTransformation

https://www.linkedin.com/pulse/when-cable-snaps-why-regional-compute-cant-andris-gailitis-we9sf

Data Independence Is National Security — Europe Can’t Wait


In today’s hyper-connected world, geopolitical tensions are often the stimulus that brings about change. When borders close, supply chains break, or critical industries are hit with sudden sanctions, we are reminded how fragile our physical and digital infrastructures are, and how much they depend on external conditions.

But here’s the irony: the absence of an active geopolitical crisis can be just as dangerous. In a “stable” political climate, people relax. Investments in strategic infrastructure (data centres, cloud sovereignty, digital independence) are postponed. The sense of urgency fades, until the next crisis makes painfully clear what we failed to build.

Europe in particular is at a crossroads. While the continent has some of the world’s most advanced data centres and strong regulatory frameworks, it still relies heavily on non-European cloud providers for the backbone of essential services. Without sustained investment in sovereign infrastructure, this dependency will only deepen.

## The Illusion of Stability

Periods of geopolitical calm create a dangerous illusion: that global connectivity and access to resources are permanent and guaranteed. Yet history—even recent history—proves otherwise. The 2021 semiconductor shortage showed just how fragile global tech supply chains really are. Energy supply disruptions arising from regional conflicts have shown that even “reliable” partners can become unavailable. Data localization disputes and sudden changes to legal frameworks leave organizations scrambling. When the next disruptive storm breaks—and it will—data centres and cloud infrastructure will be just as strategically important as airports, ports, or railways.

## Cloud Independence Goes Beyond Storage

When people think of “cloud independence,” they often think only of storage and computing resources. But it’s much more than that:

Operational sovereignty—ensuring critical workloads can run entirely within European legal jurisdiction.

Security assurance—physical and logical control over where sensitive data lives, covering both the facilities and the software environments, together with clarity about which systems and applications must be checked for compliance.

Resilience—the capacity of systems to withstand the shocks that geopolitics, economics, or society throws at them.

Meanwhile, the European hyperscale cloud market is largely controlled by U.S.-based companies. These companies offer first-rate technology, but their legal obligations (such as the U.S. CLOUD Act) can clash directly with European privacy and sovereignty requirements.

Their terms of service are extensive and subject to change, often with little notice to customers. Whatever their intentions, U.S.-based providers cannot fully guarantee data protection or privacy for an organization running its services on their infrastructure when U.S. law compels disclosure.

## The Strategic Role of Data Centres

Data centres are the heart of the digital economy. If they stopped working tomorrow, there would be no cloud computing left. Building and running them at scale involves:

1. Significant capital investment—from both the public and private sectors, including research and development.

2. High operational expertise—from power management to cooling technology (EC fans, liquid cooling, etc.). A core design criterion is minimizing power consumption, both to cut electricity costs and to reduce greenhouse gas emissions, while keeping the facility resilient against natural disasters and fire.

3. Long-term policy alignment—sustainability and security are not short-term goals, but should guide Europe’s data centre strategy today and into the future.

Europe clearly needs to expand its data centre landscape; the question isn’t if, but when and with what degree of independence. If organizations pin their lifeblood—business-critical data and applications—on foreign-owned infrastructure, their operational independence is no longer fully within their own control. This is not scaremongering. Europe did not re-examine its energy dependency to spread panic; it should now do the same with its digital dependency on American companies.

## Lessons from the Energy Sector

The recent struggles of Europe’s energy sector offer concrete lessons:

1. Diversify your sources—just as Europe sought out multiple energy suppliers, it must invest in multiple sovereign cloud and data centre providers.

2. Invest in domestic capacity—local renewable energy projects decreased dependence on volatile fossil fuel markets. Data centres need the same local investment to lessen reliance on foreign hyperscalers.

3. Plan for worst-case scenarios—strategic power reserves are the energy sector’s equivalent of data redundancy and failover systems.

## What Needs to Happen Now

If Europe is to secure its digital future, three things take priority:

1. Promote sovereign cloud initiatives – support and promote EU-law-compliant cloud services backed by European capital. Gaia-X is a good start, but it must move from bureaucracy to rapid implementation.

2. Incentivize local data centre growth – encourage investment in new data centres within EU countries through tax breaks, subsidies, and easier permitting, prioritizing green technology.

3. Educate business leaders about digital sovereignty – many executives do not fully grasp how world events directly affect their IT. As Europeans, we must take notice now, and act.

## A Call to Action

The absence of overt geopolitical flashpoints today is no excuse for inaction; it is the best time to prepare for the next storm. In times of crisis, budgets tighten, supply chains break, and decision-making becomes merely reactive. Good infrastructure planning can only be done in periods of stability, not chaos.

Europe has the resources and the regulatory framework to be a world leader in sovereign cloud and data centre operation. But time is short. Let’s not wait until the storm arrives to begin building shelter.

Author’s Note:

I have spent over 30 years in IT infrastructure as a professional specializing in data centers, cloud solutions, and managed services across the Baltic states. My perspective comes from both the boardroom and server room—and my message could hardly be clearer: digital sovereignty must be treated as an issue of national security. Because that is exactly what it is.


#DataCenter #CloudComputing #HostingSolutions #GreenTech #SustainableHosting #AI #ArtificialIntelligence #EcoFriendly #RenewableEnergy #DataStorage #TechForGood #SmartInfrastructure #DigitalTransformation #CloudHosting #GreenDataCenter #EnergyEfficiency #FutureOfTech #Innovation #TechSustainability #AIForGood

https://www.linkedin.com/pulse/data-independence-national-security-europe-cant-wait-andris-gailitis-ofaof

AI’s Double-Edged Sword in Software Development: From Speed to Security Risk


AI-powered coding assistants have changed how software is built. They autocomplete functions, generate boilerplate code in seconds, and even write entire modules on demand. For teams under pressure to ship faster, this feels like magic.

But there’s a catch — and it’s one that’s quietly worrying security teams everywhere.


## When AI Writes Code, Where Does It Come From?

Generative AI tools are trained on massive datasets, often including open-source repositories from GitHub and other public sources. That means:

  • Code reuse happens without attribution or vetting
  • Security vulnerabilities in source code can be unknowingly replicated
  • Licensing issues can creep in without detection

In practice, AI can “suggest” a snippet that looks perfect, compiles cleanly, and passes the tests — yet still carries a known vulnerability or outdated dependency.


## The New Attack Surface

The risk isn’t just theoretical. We’re already seeing patterns emerge:

  • Vulnerable Dependencies – AI might import an old library version with known CVEs (Common Vulnerabilities and Exposures) because it was present in its training set.
  • Insecure Defaults – Code generation often prefers simplicity over security (e.g., weak crypto, unsanitized inputs, hard-coded credentials).
  • Logic Oversights – AI tools may produce “functionally correct” code that is security-poor, especially if the user’s prompt doesn’t explicitly demand secure patterns.
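Insecure defaults like the ones above can be caught with even crude static checks. The sketch below is a toy pattern scanner, not a real tool: production teams would use dedicated scanners (Bandit, Semgrep, or an AI-based equivalent), and the rule set here is illustrative only.

```python
import re

# Illustrative rule set only -- real scanners such as Bandit or Semgrep
# ship far richer, language-aware rules.
RULES = {
    "hard-coded credential": re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]"),
    "weak hash": re.compile(r"\b(md5|sha1)\s*\("),
    "unsafe eval": re.compile(r"\beval\s*\("),
}

def scan(source: str):
    """Return (line_number, rule_name) for every insecure pattern found."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for name, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

snippet = 'password = "hunter2"\ndigest = md5(data)\n'
print(scan(snippet))  # -> [(1, 'hard-coded credential'), (2, 'weak hash')]
```

Running a check like this on every AI-generated snippet before it reaches review is cheap, and it catches exactly the class of “compiles cleanly but ships a weakness” code described above.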

In effect, AI can speed up insecure coding just as fast as it speeds up secure coding — and in many organizations, that’s a dangerous multiplier.


## AI to the Rescue?

Here’s the twist: the same technology introducing the risk is also becoming the most effective way to detect and mitigate it. AI-powered security tools can:

  • Scan Code in Real Time – Detect vulnerable patterns, weak encryption, and unsafe functions as the developer writes.
  • Check Dependencies – Automatically compare imported libraries against vulnerability databases and suggest patched versions.
  • Automate Secure Refactoring – Rewrite unsafe code segments using current best practices without breaking functionality.
  • Generate Test Cases – Build security-focused unit and integration tests to validate that fixes work.
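The dependency check in the list above boils down to matching pinned packages against an advisory database. A minimal sketch: the two advisories below are real CVEs for those versions, but the in-memory dictionary is a stand-in for a live feed such as OSV or the NVD.

```python
# Tiny in-memory advisory database. The two entries are real CVEs for these
# versions; a production check would query a live feed such as OSV or the NVD.
KNOWN_VULNERABLE = {
    ("requests", "2.19.0"): ["CVE-2018-18074"],
    ("pyyaml", "5.3"): ["CVE-2020-14343"],
}

def check_dependencies(requirements):
    """Match pinned 'name==version' entries against the advisory database."""
    findings = {}
    for line in requirements:
        name, _, version = line.partition("==")
        cves = KNOWN_VULNERABLE.get((name.strip().lower(), version.strip()))
        if cves:
            findings[line] = cves
    return findings

print(check_dependencies(["requests==2.19.0", "flask==3.0.0"]))
# -> {'requests==2.19.0': ['CVE-2018-18074']}
```

Wired into CI, a check like this blocks the “AI imported an old library version” failure mode before it merges.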

## The Emerging AI Security Workflow

Forward-looking dev teams are already shifting to an “AI + AI” model — AI accelerates development, and another AI layer continuously audits and hardens the output.

A secure AI coding pipeline might look like this:

  1. Code Generation – AI assists with writing new functions or integrating external modules.
  2. Automated Security Scan – A security-focused AI reviews code for known vulnerabilities, insecure patterns, and compliance gaps.
  3. Dependency Check – Libraries are matched against CVE databases in real time.
  4. Auto-Remediation – Vulnerable or risky code is refactored on the spot.
  5. Continuous Monitoring – New commits are scanned for security regressions before merging.

## Why This Will Matter More in 2025 and Beyond

Several factors are going to make this a hot topic very soon:

  • Regulatory Push – Governments are beginning to require secure-by-design practices, especially for software in critical infrastructure.
  • AI Code Volume – As more code is AI-generated, the “unknown risk” portion of software stacks will grow.
  • Attack Automation – Adversaries are also using AI to find and exploit vulnerabilities faster than before.

We’re heading toward a future where AI-assisted development without AI-assisted security will be seen as reckless.


## Best Practices Right Now

  1. Always Pair AI Coding Tools with AI Security Tools – Code generation without security scanning is a recipe for trouble.
  2. Maintain a Live SBOM (Software Bill of Materials) – Track every dependency, where it came from, and its security status.
  3. Train Developers on Secure Prompting – The quality and security of AI-generated code depends heavily on the clarity of your prompt.
  4. Use Isolated Sandboxes – Test AI-generated code in controlled environments before integrating into production.
  5. Monitor for Vulnerabilities Post-Deployment – New exploits are found daily; continuous scanning is essential.
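Practice 2, the live SBOM, can start very small. The sketch below emits a minimal JSON inventory of pinned dependencies; real SBOMs follow the SPDX or CycloneDX specifications, and the field names here are illustrative only.

```python
import hashlib
import json

def make_sbom(dependencies):
    """Emit a minimal SBOM-like JSON inventory of pinned dependencies.

    Real SBOMs follow SPDX or CycloneDX; these field names are illustrative.
    """
    components = [
        {
            "name": name,
            "version": version,
            # Stable short id derived from name+version for cross-referencing.
            "ref": hashlib.sha256(f"{name}=={version}".encode()).hexdigest()[:12],
        }
        for name, version in sorted(dependencies.items())
    ]
    return json.dumps({"format": "minimal-example", "components": components}, indent=2)

print(make_sbom({"requests": "2.32.0", "flask": "3.0.0"}))
```

Even a record this simple answers the key question after a new CVE drops: which of our services ship the affected component, and at what version?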

Bottom line: AI in programming is like adding a rocket booster to your software team — but if you don’t build a heat shield, you’ll burn up on reentry. The future of safe software development won’t be “AI vs. AI” — it’ll be AI working alongside AI to deliver both speed and security.


#AIcoding #AISecurity #SecureDev #GenerativeAI #CyberSecurity #AItools #DevSecOps #AIcode #AIrisks #SoftwareSecurity #AIDevelopment #AIvulnerability #AIinfrastructure #AIsafety #AIforDevelopers

https://www.linkedin.com/pulse/ais-double-edged-sword-software-development-from-speed-gailitis-fs5af
