Author: admin

  • How to Build a 7.2 kW Data Center Rack with 1.2 PUE Efficiency

    1) Efficient power delivery — high-efficiency intelligent (AI-enabled) PDUs

    What to include

    • High-efficiency PDUs / ePDUs (rack-mounted) with <2% loss at rated load. Look for ultra-low-loss bussing and low-resistance connections.
    • Per-outlet metering & switching so you can measure and cycle individual servers or blades.
    • Power factor correction (PFC) and design to keep overall PF > 0.98.
    • High-efficiency UPS upstream (online double-conversion with ECO/economizer mode, or a transformerless UPS delivering 96–99% efficiency at typical load).
    • DC/48V distribution option if you can deploy servers that support DC input — removes one AC→DC conversion step.
    • AI-assisted energy optimization in the PDU firmware: automated load balancing across phases, anomaly detection (leakage, harmonics), and predictive shedding.
    • Redundancy topology: right-size redundancy (e.g., N, N+1) — avoid heavy oversizing, which raises idle losses.

    Why it helps PUE

    • Reduces conversion and distribution losses, which count toward total facility power (the PUE numerator) but not toward IT power. Per-rack losses add up — shaving 1–3% of loss off the power chain directly improves PUE.
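
    As a rough illustration of that arithmetic, the sketch below compares PUE at a 5% versus a 2% power-chain loss for a 7.2 kW rack. The cooling figure and both loss fractions are assumptions chosen only to show the effect, not measurements.

    ```python
    # Rough illustration of how power-chain losses feed straight into PUE.
    # All numbers are assumptions for a 7.2 kW rack, not measurements.

    IT_KW = 7.2                  # IT load measured at the server inputs
    COOLING_KW = 2.2             # assumed cooling overhead for this example
    LOSS_BEFORE = 0.05           # 5% combined UPS + PDU loss (older power chain)
    LOSS_AFTER = 0.02            # 2% loss with a high-efficiency UPS + ePDU

    def pue(it_kw: float, cooling_kw: float, chain_loss: float) -> float:
        """PUE = total facility power / IT power."""
        facility_kw = it_kw + cooling_kw + it_kw * chain_loss
        return facility_kw / it_kw

    print(f"PUE at 5% chain loss: {pue(IT_KW, COOLING_KW, LOSS_BEFORE):.2f}")  # -> 1.36
    print(f"PUE at 2% chain loss: {pue(IT_KW, COOLING_KW, LOSS_AFTER):.2f}")   # -> 1.33
    # A ~3-point loss reduction shows up as roughly a 0.03 drop in PUE.
    ```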

    Concrete settings / checks

    • Verify the PDU efficiency curve at the expected load (7.2 kW).
    • Maintain PF ≥ 0.98 (measure with PDU/UPS metering).
    • Set the per-outlet reporting interval to 1 min for trending and 5 s for alarms.
    • Enable automatic phase balancing where available. (A telemetry sketch covering these checks follows this list.)
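
    A minimal sketch of those checks against per-phase PDU telemetry, assuming hypothetical field names and a 10% phase-imbalance limit; map them onto whatever your ePDU or DCIM API actually exposes.

    ```python
    # Minimal sketch of the checks above, run against per-phase PDU telemetry.
    # The telemetry dict and field names are hypothetical placeholders.

    PHASE_IMBALANCE_LIMIT = 0.10   # 10% max deviation from the phase average (assumption)

    def check_pdu(sample: dict) -> list[str]:
        """Return a list of alert strings; an empty list means all checks passed."""
        alerts = []
        # Loss check: (input - output) / input at the current load
        loss = (sample["input_kw"] - sample["output_kw"]) / sample["input_kw"]
        if loss > 0.02:
            alerts.append(f"PDU loss {loss:.1%} exceeds the 2% target")
        # Power-factor check
        if sample["power_factor"] < 0.98:
            alerts.append(f"PF {sample['power_factor']:.3f} below the 0.98 target")
        # Phase-balance check
        phases = sample["phase_kw"]        # e.g. {"L1": 2.5, "L2": 2.3, "L3": 2.4}
        avg = sum(phases.values()) / len(phases)
        worst = max(abs(p - avg) / avg for p in phases.values())
        if worst > PHASE_IMBALANCE_LIMIT:
            alerts.append(f"Phase imbalance {worst:.1%} exceeds {PHASE_IMBALANCE_LIMIT:.0%}")
        return alerts

    sample = {"input_kw": 7.32, "output_kw": 7.2, "power_factor": 0.99,
              "phase_kw": {"L1": 2.5, "L2": 2.3, "L3": 2.4}}
    print(check_pdu(sample) or ["all checks passed"])
    ```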

    2) Targeted, low-loss cooling — hot/cold containment + in-row / liquid options

    Tiered options (from easiest → highest efficiency):

    A. Hot/cold aisle containment (mandatory)

    • Full aisle containment (doors + roof) to prevent mixing.
    • Blanking panels, grommet-sealed cable cutouts, and a raised-floor or ducted cold-air supply into the containment.
    • Result: less cooling airflow is required and setpoints can be raised.

    B. In-row cooling (close-coupled)

    • Close-coupled cooling units placed between the rack rows; short air paths reduce fan energy.
    • Variable-speed compressor and EC fans to match load.
    • Ideal for 2–10 kW/rack densities.

    C. Rear-door heat exchangers / liquid-cooled rear doors

    • Coolant loop (glycol/water) at rack rear captures exhaust heat immediately — reduces room HVAC load and fan energy.
    • A low water-side ΔT (say 6–10°C) works best with high-COP chillers or free cooling.

    D. Direct-to-chip or immersion cooling (highest density, best efficiency)

    • Coolant directly cools CPUs/GPUs (cold plates) or two-phase/immersion systems.
    • Heat is captured at the source, eliminating most fan airflow and much of the chiller work — PUE gains are substantial for high-density racks.

    Why it helps PUE

    • Moves cooling to the rack/row level, cuts distribution losses (air mixing, overcooling), and reduces CRAC work. Liquid cooling often halves the cooling energy per kW of IT.

    Concrete design targets / settings

    • ASHRAE recommended inlet temperature for most servers: 18–27°C (some modern gear tolerates 27–32°C under the allowable classes — verify with the vendor).
    • Aim for a ΔT (rack exhaust − rack inlet) of 10–20°C for effective liquid capture; the sketch after this list sizes the air and coolant flows for those targets.
    • Use variable-speed EC fans and set fan curves to respond to rack ΔT.
    • Implement water-side economizer / free cooling when ambient allows (outside loop or dry cooler).
    • Containment airflow leakage <5% (seal and pressure-test).
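
    The back-of-the-envelope sizing below applies a plain heat balance (Q = ṁ·cp·ΔT) to the 7.2 kW rack using the targets above. The specific ΔT values are assumptions to adjust to your actual design.

    ```python
    # Back-of-the-envelope sizing for a 7.2 kW rack using the targets above.
    # Plain heat balance (Q = m_dot * cp * dT); constants are textbook values
    # and the chosen dT figures are assumptions.

    Q_KW = 7.2                 # heat to remove (approx. the IT load)

    # Air side: volumetric flow needed for a given air-side dT
    AIR_DT_C = 12.0            # exhaust - inlet, within the 10-20 degC target
    RHO_AIR = 1.2              # kg/m^3
    CP_AIR = 1.005             # kJ/(kg*K)
    air_m3_s = Q_KW / (RHO_AIR * CP_AIR * AIR_DT_C)
    print(f"Airflow: {air_m3_s:.2f} m^3/s (~{air_m3_s * 2118.9:.0f} CFM)")

    # Water side (rear-door / liquid loop): mass flow for a given water-side dT
    WATER_DT_C = 8.0           # within the 6-10 degC range above
    CP_WATER = 4.186           # kJ/(kg*K)
    water_kg_s = Q_KW / (CP_WATER * WATER_DT_C)
    print(f"Coolant flow: {water_kg_s:.2f} kg/s (~{water_kg_s * 60:.0f} L/min)")
    ```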

    3) Smart monitoring & control — DCIM + AI-driven adjustments

    Core components

    • DCIM platform integrated with PDUs, CRACs, in-row units, BMS, UPS, chillers, and environmental sensors.
    • High-density sensor network: per-rack temperature sensors (inlet & exhaust), humidity, cabinet-level power, pressure, and air velocity.
    • AI/ML control layer: predictive cooling, model-based control (digital twin), anomaly detection for equipment or energy drift.
    • Closed-loop control: DCIM must be able to send setpoints to CRACs/in-row units and PDUs (fan speed, supply temp, pump speed, chiller staging).
    • Visualization & automated reporting: real-time PUE, rack PUE attribution, trend alerts.

    AI use-cases

    • Predictive economizer engagement — starts/stops free-cooling earlier/later based on thermal forecast.
    • Optimal setpoint tuning — raises inlet temperatures where the hardware allows it, lowering chiller load.
    • Server consolidation recommendations — identify underutilized servers for consolidation to improve IT efficiency.
    • Anomaly detection — early warning for failing fans, coil fouling (visible as drift in coil ΔT or approach temperature), or power quality issues; a minimal drift-detection sketch follows this list.
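
    A minimal sketch of that drift-detection idea: a rolling z-score over a coil ΔT signal. The window size, threshold, and synthetic data are assumptions; a production version would live in the DCIM/AI layer and consume real telemetry.

    ```python
    # Flag a coil dT reading that sits far outside its rolling statistics.
    # Window size, threshold, and the synthetic data are assumptions.

    from collections import deque
    from statistics import mean, stdev

    WINDOW = 288          # e.g. 24 h of 5-minute samples
    Z_LIMIT = 3.0         # flag readings more than 3 sigma from the rolling mean
    MIN_HISTORY = 30      # need some history before judging anything

    history = deque(maxlen=WINDOW)

    def check_coil_dt(dt_c: float) -> bool:
        """Return True if the latest coil dT reading looks anomalous."""
        anomalous = False
        if len(history) >= MIN_HISTORY:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(dt_c - mu) / sigma > Z_LIMIT:
                anomalous = True
        history.append(dt_c)
        return anomalous

    # Synthetic example: stable dT around 11 degC, then a sudden drop
    readings = [11.0 + 0.1 * (i % 3) for i in range(100)] + [8.5]
    flags = [check_coil_dt(r) for r in readings]
    print("first anomaly at sample:", flags.index(True) if any(flags) else None)
    ```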

    Metrics to track

    • Rack IT power (kW), PDU loss percentage, inlet/exhaust temps, fan/pump speeds, chiller COP, room CRAC power.
    • Real-time PUE (facility power / IT power) and a rack-level PUE proxy (if full facility metering is not available); both are sketched after this list.
    • Trending windows: 1 min, 15 min, hourly, daily.
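
    Here is a minimal sketch of both metrics. The subsystem breakdown and the example readings are assumptions; in practice the inputs come from DCIM, PDU, and BMS telemetry at the 1-minute trending interval.

    ```python
    # Sketch of the two PUE metrics above, with assumed example readings.

    def realtime_pue(it_kw: float, cooling_kw: float, power_loss_kw: float,
                     misc_kw: float = 0.0) -> float:
        """PUE = total facility power / IT power."""
        return (it_kw + cooling_kw + power_loss_kw + misc_kw) / it_kw

    def rack_pue_proxy(rack_it_kw: float, rack_cooling_kw: float,
                       rack_pdu_loss_kw: float) -> float:
        """Rack-level proxy when full-facility metering is unavailable:
        count only the cooling and power losses serving this rack."""
        return (rack_it_kw + rack_cooling_kw + rack_pdu_loss_kw) / rack_it_kw

    # Assumed readings for the 7.2 kW rack
    print(f"facility PUE   ~ {realtime_pue(7.2, 1.15, 0.25, 0.1):.2f}")   # -> 1.21
    print(f"rack PUE proxy ~ {rack_pue_proxy(7.2, 1.0, 0.12):.2f}")       # -> 1.16
    ```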

    Control policies (practical)

    • Setpoint optimization: keep server inlet at 24–27°C where the vendor has validated it. Use 0.5°C of hysteresis to avoid oscillation (see the control-loop sketch after this list).
    • Fan/pump speed curves tied to rack ΔT or inlet temp; never use fixed high speeds.
    • Chiller staging based on aggregated rack power demand and predicted near-term load (AI).
    • Graceful shedding plan: predefine which non-critical loads can be cycled if cooling or power limits are exceeded.
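
    The sketch below illustrates two of those policies: a fan-speed curve tied to rack ΔT and a 0.5°C hysteresis band around the inlet setpoint. The setpoint, curve breakpoints, and sample readings are assumptions; real commands would go through the DCIM/BMS integration rather than directly like this.

    ```python
    # Fan curve tied to rack dT plus hysteresis on the inlet setpoint.
    # Setpoint, breakpoints, and sample readings are assumptions.

    INLET_SETPOINT_C = 26.0
    HYSTERESIS_C = 0.5

    def fan_speed_pct(rack_dt_c: float) -> float:
        """Piecewise fan curve: higher rack dT -> more airflow, never fixed-high."""
        if rack_dt_c <= 10.0:
            return 30.0                                      # floor, never fully off
        if rack_dt_c >= 20.0:
            return 100.0
        return 30.0 + (rack_dt_c - 10.0) / 10.0 * 70.0       # linear ramp in between

    def extra_cooling(inlet_c: float, currently_on: bool) -> bool:
        """Hysteresis band around the inlet setpoint to avoid oscillation."""
        if inlet_c > INLET_SETPOINT_C + HYSTERESIS_C:
            return True
        if inlet_c < INLET_SETPOINT_C - HYSTERESIS_C:
            return False
        return currently_on                                  # inside the band: hold state

    state = False
    for inlet, dt in [(25.2, 11.0), (26.4, 13.5), (26.8, 15.0), (25.3, 12.0)]:
        state = extra_cooling(inlet, state)
        print(f"inlet {inlet} degC, dT {dt} degC -> extra cooling: {state}, "
              f"fan {fan_speed_pct(dt):.0f}%")
    ```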

    Quick PUE contribution estimate (typical improvements)

    (These are illustrative — your actuals depend on building, local climate, and equipment.)

    • Baseline (typical older design): PUE 1.6–1.8
    • Implement high-efficiency PDUs + modern UPS: reduce by ~0.05–0.10 PUE
    • Hot/cold containment + right airflow management: reduce by ~0.10–0.20 PUE
    • In-row / rear-door liquid cooling: reduce by ~0.10–0.25 PUE (depending on replacement of room CRACs)
    • DCIM + AI optimization (setpoint increase, economizer optimization): reduce by ~0.05–0.15 PUE
      Target combined improvement to reach ~1.2 (sum of the above; the quick arithmetic is shown below).
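
    Treating the reductions as additive from a mid-range baseline is a simplification (the measures interact), but the arithmetic behind the ~1.2 target looks like this, using the midpoints of the ranges above:

    ```python
    # Midpoints of the ranges above, subtracted from a mid-range baseline.
    # Illustrative only; real savings are not strictly additive.

    baseline = 1.7                                    # middle of the 1.6-1.8 range
    improvements = {
        "high-efficiency PDUs + modern UPS": 0.075,   # midpoint of 0.05-0.10
        "containment + airflow management":  0.15,    # midpoint of 0.10-0.20
        "in-row / rear-door liquid cooling": 0.175,   # midpoint of 0.10-0.25
        "DCIM + AI optimization":            0.10,    # midpoint of 0.05-0.15
    }
    print(f"estimated PUE: {baseline - sum(improvements.values()):.2f}")  # -> 1.20
    ```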

    Immediate action checklist (to deploy for a 7.2 kW rack)

    1. Power
      • Select ePDU with per-outlet metering & switching; confirm loss <2% at 7–8 kW.
      • Choose transformerless/high-efficiency UPS rated for typical load with ECO mode.
      • Ensure phase balancing and PF correction.
    2. Cooling
      • Install hot/cold aisle containment, blanking, and seal penetrations.
      • Fit either an in-row cooling unit or a rear-door heat exchanger sized for ≥8.5 kW (headroom above the 7.2 kW load).
      • Provision chilled-water loop with variable-speed pumps and free-cooling capability.
    3. Monitoring & Control
      • Deploy DCIM with real-time telemetry from PDUs, cooling units, and rack sensors.
      • Implement AI-based control module for setpoint optimization & predictive economization.
      • Create dashboards for PUE, rack power, inlet temps, and alarms.
    4. Validation & Commissioning
      • Perform HVAC commissioning: thermal imaging, airflow smoke testing, and containment leakage test.
      • Run a 24–72 hour load test at full rack load and measure facility power to compute the real PUE (see the sketch after this list).
      • Tune control loops and fan curves based on test results.
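
    For the commissioning step, the sketch below computes the energy-weighted PUE over the test window by integrating logged power samples with the trapezoidal rule. The sample log and its layout are placeholders for whatever your DCIM exports.

    ```python
    # Energy-weighted PUE over the 24-72 h load test, via trapezoidal
    # integration of logged power samples. The log below is a placeholder.

    def average_pue(samples: list[tuple[float, float, float]]) -> float:
        """samples: (timestamp_hours, facility_kw, it_kw) tuples, sorted by time.
        Returns total facility kWh / total IT kWh over the window."""
        facility_kwh = it_kwh = 0.0
        for (t0, f0, i0), (t1, f1, i1) in zip(samples, samples[1:]):
            dt_h = t1 - t0
            facility_kwh += (f0 + f1) / 2 * dt_h
            it_kwh += (i0 + i1) / 2 * dt_h
        return facility_kwh / it_kwh

    # Hypothetical 36-hour excerpt of the test log
    log = [(0, 8.8, 7.2), (12, 8.6, 7.2), (24, 8.7, 7.1), (36, 8.5, 7.2)]
    print(f"test-window PUE: {average_pue(log):.2f}")   # -> 1.21
    ```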

    Short example configuration (practical)

    • Rack IT load: 7.2 kW
    • ePDU: 3-phase, per-outlet metering, losses <1.5% at 8 kW
    • UPS: transformerless, 97% efficiency at 50–75% load, N or N+1 as chosen
    • Cooling: rear-door heat exchanger with glycol loop + in-row EC fan backup
    • DCIM: integrates PDUs, CRAC, BMS, and in-row units; AI module active for economizer & setpoint tuning
    • Target inlet: 26°C; exhaust ~36°C; sealed containment airflow with <5% leakage
    • Expected PUE: ~1.15–1.25 (after commissioning & AI tuning); a quick sanity check on that figure follows
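
    As a sanity check on that expectation: the UPS efficiency and PDU loss come from the spec above, while the cooling power attributed to the rack is an assumed figure chosen to show how the ~1.15–1.25 range falls out.

    ```python
    # Sanity check on the expected PUE for the example configuration.
    # COOLING_KW is an assumption; the efficiency figures are from the spec above.

    IT_KW = 7.2
    UPS_EFF = 0.97            # 97% at 50-75% load (from the spec above)
    PDU_LOSS = 0.015          # <1.5% at 8 kW (from the spec above)
    COOLING_KW = 1.0          # assumed: rear-door loop + pumps + residual room HVAC

    pdu_in = IT_KW / (1 - PDU_LOSS)          # power entering the ePDU
    ups_in = pdu_in / UPS_EFF                # power entering the UPS
    facility_kw = ups_in + COOLING_KW
    print(f"expected PUE ~ {facility_kw / IT_KW:.2f}")   # ~1.19 with these assumptions
    ```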
  • Building (AI) 7.2 kW Rack

    Objective Statement:
    To design and deploy a high-efficiency server rack capable of delivering 7.2 kW usable capacity while operating at an optimized power usage effectiveness (PUE) of 1.2, supporting sustainable and reliable high-performance computing.

    Vision Statement:
    To lead in energy-efficient data center infrastructure by providing cutting-edge rack solutions that maximize computational output per watt, setting new standards for sustainable, high-density computing.

    Mission Statement:
    To engineer and implement innovative rack systems that combine high power density, operational efficiency, and sustainability, empowering organizations to achieve superior performance with reduced energy consumption and environmental impact.

  • Hello world!

    Welcome to our new blog. This is our very first post, and we’re excited to share our journey with you. We’re currently building an AI app for startups, and like many of you, we’re a startup ourselves—figuring out where to begin.

    This blog will be a living diary of our experience, from the first steps to the big breakthroughs. We believe the best app for founders is one built by founders.

    I am in the idea-gathering phase for the next month. If you have any suggestions or ideas, we’d love to hear them! Drop us a line at ir@nieuwewerken.com.

    I also committed to making this a community effort. We’ll be developing the app as an open-source project so we can all learn and build together.

    I’m thrilled to share my journey with you. I’m presently the sole founder of this brand, and as a startup myself, I’m building an AI app to help other startups navigate their challenges.

    Rao (The King)