
    Hyperscale Power Design for Sharjah's Data-Centre Belt


    Hyperscale power architecture in the GCC has converged on a pattern visible at Khazna, Equinix, and the newer Sharjah builds: dual 132 kV intakes, a 2N transformer fleet at 33/11 kV, and N+1 redundancy at the UPS-input transformer. The architecture differs from the typical European 20 kV ring-main network in ways that matter for transformer specification, particularly short-circuit duty and harmonic content. We unpack the differences.

    The Unforgiving Client

    The modern hyperscale data centre is not a building that consumes electricity. It is a single, monolithic, 100+ MW processing client with an absolutely brutal load profile. Unlike a factory with its staggered motor starts and variable process loads, a data centre campus is a relentless, flat line on the load curve, running at 80-90% of its maximum capacity, 24/7/365. This presents a unique problem for utility planners and facility designers alike.
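To put that flat profile in numbers, here is a back-of-envelope sketch. The 85% and 45% load factors are illustrative assumptions for the comparison, not measured figures:

```python
# Back-of-envelope comparison of a flat hyperscale load against a
# typical industrial load factor. All figures are illustrative.

HOURS_PER_YEAR = 8760

def annual_energy_mwh(capacity_mw: float, load_factor: float) -> float:
    """Energy drawn over one year at a given average utilisation."""
    return capacity_mw * load_factor * HOURS_PER_YEAR

data_centre = annual_energy_mwh(100, 0.85)  # near-flat at 85% of capacity
factory = annual_energy_mwh(100, 0.45)      # staggered industrial shifts

print(f"Data centre: {data_centre:,.0f} MWh/yr")
print(f"Factory:     {factory:,.0f} MWh/yr")
```

A 100 MW campus at an 85% load factor draws roughly 745,000 MWh a year, close to double an industrial site of the same nameplate rating, and it draws it without the troughs a utility planner normally relies on.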

    The concentration of compute in Sharjah, particularly in the Sharjah Airport International Free Zone (SAIF Zone) and the Hamriyah Free Zone, is driven by excellent fibre connectivity and a favourable tariff structure. But connecting a 100 MW load that cannot tolerate the slightest deviation is a different engineering league. A traditional industrial customer might accept a few momentary sags or swells a year; a data centre measures downtime in millions of dollars per hour. The electrical infrastructure, from the 132 kV utility incomer down to the final rack-level power distribution unit (PDU), is not just a facility service. It *is* the product.

    This unforgiving nature means the initial design phase—the handshake with the Sharjah Electricity, Water and Gas Authority (SEWA)—is arguably the single most critical risk-mitigation stage of the entire project. Get it wrong, and you build a multi-million dollar monument to failure.

    The 132 kV Handshake: A Pact of Steel

    Connecting a load of this magnitude isn't as simple as submitting a load application. It involves a deep, bilateral engineering study with SEWA to ensure the new facility doesn't introduce instability to the wider grid. The mechanism for this is typically a dedicated primary substation, fed by at least two independent 132 kV transmission lines.

    Here in the UAE, the design of this critical interface is everything. The on-site primary substation steps the 132 kV down to a more manageable medium voltage, usually 33 kV or 11 kV, for distribution across the campus. This involves some of the heaviest pieces of electrical equipment in the entire build: the main power transformers. These are not your neighbourhood pole-mounted units; we're talking about 120 MVA behemoths, often configured in an N+1 or N+2 arrangement to ensure capacity is always available, even during maintenance or failure of one unit.
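As a rough illustration of how such a fleet is sized, the sketch below counts 120 MVA units for a hypothetical 100 MW IT load. The 0.95 power factor and 15% mechanical/house-load overhead are assumptions for the example, not figures from any specific project:

```python
import math

def transformer_count(load_mva: float, unit_mva: float,
                      redundancy: int = 1) -> int:
    """Units required so the load is still carried with `redundancy`
    units out of service (N+1 -> redundancy=1, N+2 -> redundancy=2)."""
    n = math.ceil(load_mva / unit_mva)  # units needed just to carry the load
    return n + redundancy

# Hypothetical 100 MW IT load at 0.95 power factor, plus ~15% for
# cooling and house loads (both assumed values for illustration).
load_mva = 100 / 0.95 * 1.15

print(transformer_count(load_mva, 120, redundancy=1))  # N+1 fleet
print(transformer_count(load_mva, 120, redundancy=2))  # N+2 fleet
```

Note how the ceiling function bites: the assumed 121 MVA load only just exceeds one unit's rating, yet it forces a second duty transformer before any redundancy is added.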

    The core of the problem is that every decision made here has cascading consequences. The transformer's impedance characteristic affects the prospective fault level for the entire site. The choice of on-load tap changer (OLTC) scheme dictates how the facility will ride through voltage dips. The protection philosophy—how relays and circuit breakers are set—must be perfectly coordinated with SEWA’s upstream network. A mismatch can lead to a sympathetic trip, where a fault on a neighbouring feeder takes your data centre offline. Yes, really.
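The impedance-to-fault-level relationship can be sketched with the standard per-unit approximation. The 12.5% impedance and 5,000 MVA upstream fault level below are illustrative values, not SEWA figures:

```python
import math

def fault_level_mva(tx_mva: float, z_pu: float,
                    grid_sc_mva: float = 0.0) -> float:
    """Three-phase fault level on the transformer secondary.
    grid_sc_mva = 0 treats the source as an infinite bus; otherwise
    the source impedance is added in series, with both impedances
    expressed in per-unit on the transformer's own MVA base."""
    z_total = z_pu + (tx_mva / grid_sc_mva if grid_sc_mva else 0.0)
    return tx_mva / z_total

def fault_current_ka(sc_mva: float, voltage_kv: float) -> float:
    """Symmetrical fault current from fault MVA at a given voltage."""
    return sc_mva / (math.sqrt(3) * voltage_kv)

# Assumed: 120 MVA unit, 12.5% impedance, 5,000 MVA source at 132 kV
sc = fault_level_mva(120, 0.125, grid_sc_mva=5000)
print(f"{sc:.0f} MVA fault level, {fault_current_ka(sc, 33):.1f} kA at 33 kV")
```

This is why the impedance value on the transformer nameplate is a system-design decision, not a vendor default: lower impedance improves voltage regulation but pushes the downstream switchgear into a higher fault-duty class.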

    SEWA's grid code is robust, but it’s a framework, not a blueprint. The onus is on the data centre’s engineering team to demonstrate, through extensive modelling in ETAP or DIgSILENT PowerFactory, that their facility will be a good grid citizen. This is where the project is often won or lost, long before a single shovel hits the ground.

    When 2N Redundancy Isn't Redundant

    Every data centre tender document is littered with the term "2N redundancy." The concept is simple: for every critical component in the power chain (transformer, switchgear, UPS, PDU), there is a complete, independent, mirrored duplicate. If path 'A' fails, path 'B' takes over instantly. In theory, it’s infallible. In practice, it’s where catastrophic failures are born.
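The arithmetic behind that claim is worth seeing. Below is a minimal availability sketch, assuming illustrative three-nines paths and genuinely independent failures:

```python
def parallel(*avail: float) -> float:
    """Availability of independent paths in parallel (any one suffices)."""
    unavail = 1.0
    for a in avail:
        unavail *= (1.0 - a)
    return 1.0 - unavail

def series(*avail: float) -> float:
    """Availability of components in series (every one must work)."""
    total = 1.0
    for a in avail:
        total *= a
    return total

path = 0.999                          # each A/B path: three nines (assumed)
ideal_2n = parallel(path, path)       # independent paths: six nines
with_spof = series(ideal_2n, 0.999)   # plus one shared 99.9% component
print(f"{ideal_2n:.6f} vs {with_spof:.6f}")
```

A single shared element rated at the same 99.9% availability pulls the combined figure back below either path on its own. That is the arithmetic signature of every hidden single point of failure.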

    The consequence of a poorly executed redundancy strategy is total blackout, the very thing 2N was meant to prevent. The mechanism for this failure is often a hidden single point of failure that violates the entire philosophy. Here are five of the most common flaws we see in GCC-based projects:

    1. The Shared-Trench Sin: The A and B path medium-voltage cables are laid in the same trench from the primary substation to the data halls. An excavation accident or trench fire takes out both paths simultaneously. True 2N requires complete physical and spatial separation.

    2. Uncoordinated Protection Settings: The relay on the B-side switchgear is not correctly set to discriminate from the A-side. A fault in the A path switchboard causes a cascading trip that also opens the B path breaker, because their trip curves or time-current coordination were improperly calculated.

    3. Automatic Transfer Switch (ATS) Failure: The logic in the ATS that is supposed to manage the seamless switchover from utility to generator power, or from A to B path, contains a flaw. We’ve seen firmware bugs or incorrect settings that prevent a transfer, or worse, attempt to parallel two unsynchronized sources, leading to an explosion.

    4. Harmonic Overload on Neutrals: Data centres are massive sources of harmonic distortion from their switch-mode power supplies. In a 3-phase system, these triplen harmonics (3rd, 9th, 15th) don't cancel in the neutral conductor as fundamental currents do. Instead, they add up. An undersized neutral conductor—a common cost-cutting measure—can overheat and fail, taking down a whole PDU and its "redundant" counterpart if they share the same neutral return path.

    5. Cooling System Interdependence: The 2N power system is perfect, but the chillers or computer room air handler (CRAH) units that cool the data hall all have their control circuits wired to a single, non-redundant control panel. The power is fine, but the cooling goes down, and the servers execute a thermal shutdown in minutes. The redundancy of one system was negated by the fragility of another.
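The triplen-harmonic mechanism in point 4 is easy to quantify. The sketch below computes neutral current for a balanced three-phase, four-wire system; the 400 A phase current and the harmonic percentages are illustrative assumptions, not measurements:

```python
import math

def neutral_current_a(phase_fundamental_a: float,
                      triplen_pct: dict) -> float:
    """RMS neutral current in a balanced three-phase, four-wire system.
    Balanced fundamental currents cancel in the neutral; triplen
    harmonics are zero-sequence, so each order sums arithmetically:
    I_N(h) = 3 * I_phase(h). triplen_pct maps harmonic order to the
    per-phase magnitude as a percentage of the fundamental."""
    squares = 0.0
    for order, pct in triplen_pct.items():
        i_h = phase_fundamental_a * pct / 100.0
        squares += (3.0 * i_h) ** 2
    return math.sqrt(squares)

# 400 A per phase with assumed SMPS-style triplen content
i_n = neutral_current_a(400, {3: 60.0, 9: 10.0, 15: 4.0})
print(f"Neutral carries {i_n:.0f} A")
```

With these assumed percentages the neutral carries roughly 730 A against a 400 A phase current, which is why a full-size or doubled neutral is standard practice in data-hall distribution.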

    The Real Estate Problem: Ester Fluids & Compact Substations

    One of the biggest non-electrical problems in Sharjah’s data centre alley is the cost and scarcity of land. A traditional 132/11 kV substation with air-insulated switchgear (AIS) and mineral oil-filled transformers requires a massive footprint, not just for the equipment itself, but for the extensive civil works—blast walls, oil containment bunds, and safety clearances.

    This is a direct conflict with the data centre business model, which aims to maximize the ratio of server hall space to total land area. The solution lies in rethinking the substation's core components. By specifying K-class ester-filled transformers instead of traditional mineral oil units, designers can fundamentally change the site layout.

    • Fire Safety: Ester fluid has a fire point above 300°C, compared with roughly 170°C for mineral oil, which places it in the K class of less-flammable liquids under IEC 61039. In practice it will not sustain or propagate a fire, often eliminating the need for expensive deluge systems and large blast walls between transformers.
    • Footprint Reduction: Without the need for massive containment bunds and with reduced clearances, the entire substation can be squeezed into a much smaller plot. This is particularly effective when paired with gas-insulated switchgear (GIS) instead of AIS.
    • Environmental Benefits: Natural esters are biodegradable, a significant advantage should a leak ever occur. This aligns with the sustainability goals many hyperscale operators now mandate.

    Combining these transformers with containerized switchgear rooms gives rise to the package substation: factory-built and tested modules that arrive on-site ready to be interconnected, drastically reducing civil works, site construction time, and commissioning risk. For a project on an aggressive timeline, this can shave months off the schedule.

    Key Takeaways

    • The Grid is the Product: The success of a data centre is determined by the robustness of its electrical design, starting with the 132 kV utility connection. Any weakness here will manifest as unacceptable operational risk.
    • Redundancy is a Philosophy, Not a Topology: Simply buying two of everything (2N) guarantees nothing. True resilience comes from eliminating every potential single point of failure in power, cooling, and control paths.
    • Design for Density: In land-constrained areas like Sharjah's free zones, using technologies like ester-filled transformers and package substations is not a luxury; it's a core enabler of the business case, allowing more capital to be deployed on revenue-generating servers.

    The Engineer's Takeaway

    The most sophisticated server farm in the world is only as reliable as the least-understood relay in its primary substation. The difference between a data centre and a data *disaster* isn't the PUE figure or the server count. It's the obsessive, paranoid, and rigorous attention to detail in the power chain before the first server is ever unboxed. If you have questions about your own design, it's best to contact us early.

