POWER9 9080-M9S E980 via RISC Analysis DataBox : www.riscanalysis.com/databox/
9080-M9S E980 2018/08/07

19-Inch Rack Mount

The E980 System comprises 1x (2U) System Control Unit and 1x to 4x (5U) System Nodes

2U System Control Unit (SCU):
-----------------------------
2x #EFFP Flexible Service Processors (FSPs)
1x Operator Panel
System Vital Product Data (VPD) Card
USB Port for optional External DVD

5U System Nodes:
----------------
System Nodes are ordered using Processor FCs

Each Proc FC delivers 4x identical SCMs per System Node

Each System Node contains:
  32x DDR4 CDIMM Slots (DDR3 supported)
    up to 16TB per Node (64TB per 4-Node System)
    up to 230GBps Peak Memory Bandwidth per SCM (920GBps per Node)

  8x PCIe4 (x16) I/O Slots
    up to 545GBps peak I/O Bandwidth per Node

  4x NVMe U.2 2.5-inch SSD Bays
    NVMe U.2 Modules are driven by an internal PCIe (x4) connection

  4x 1950W 200V-240V AC Power Supplies (no FCs)

Additional SMP and FSP Cables are required to connect the SCU and System Nodes


9080-M9S E980 Processor FCs
------------------------------------------------------
FC                   | #EFP1 | #EFP2 | #EFP3 | #EFP4 |
Min GHz              |  3.9  |  3.7  |  3.55 |  3.58 |
Max GHz              |  4.0  |  3.9  |  3.9  |  3.9  |
---------------------|-------|-------|-------|-------|
CCIN                 |  5C35 |  5C36 |  5C39 |  5C46 |
---------------------|-------|-------|-------|-------|
Cores/SCM            |    8  |   10  |   12  |   11  |
Cores/Proc           |   32  |   40  |   48  |   44  |
Cores/System         |  128  |  160  |  192  |  176  |
---------------------|-------|-------|-------|-------|
Min/Max Procs (Node) |  4/4  |  4/4  |  4/4  |  4/4  |
Min Core Activation  |    8  |   10  |   12  |   11  |
---------------------|-------|-------|-------|-------|
Other Proc FCs       |       |       |       |       |
---------------------|-------|-------|-------|-------|
Capacity Backup CBU  | #EFB1 | #EFB2 | #EFB3 | #EFB4 |
Healthcare Solution  |       | #EHC6 |       |       |
---------------------|-------|-------|-------|-------|
Activation           |       |       |       |       |
---------------------|-------|-------|-------|-------|
1x Core              | #EFPA | #EFPB | #EFPC | #EFP9 |
1x Core (Linux)      | #ELBK | #ELBL | #ELBM | #ELBQ |
1x Core w/Mobile     | #EFPE | #EFPF | #EFPG | #EFPN |
1x Core Healthcare   |       | #ELAU |       |       |
---------------------|-------------------------------|
1x Static to Mobile  |             #EFPD             |
---------------------|-------------------------------|
P7 Upgrade Mobile    |      #EFPH  (from #EP2V)      |
P8 Upgrade Mobile    |  #EP2W  (from #EP2S, #EP2T)   |
---------------------|-------------------------------|
Elastic/CoD/Billing  |                               |
---------------------|-------------------------------|
Enablement           |       |       |       | #EM9V |
---------------------|-------|-------|-------|-------|
1x Proc Day AIX      |       |       |       | #EPKU |
1x Proc Day IBM i    |       |       |       | #EPKV |
---------------------|-------|-------|-------|-------|
100x Proc Day AIX    |       |       |       | #EPKW |
100x Proc Day IBM i  |       |       |       | #EPKX |
---------------------|-------|-------|-------|-------|
100x Proc Min AIX    |       |       |       | #EPKY |
100x Proc Min IBM i  |       |       |       | #EPKZ |
------------------------------------------------------

All Procs in a System must be identical
All Cores on the first SCM must be active (this is a System Min and not a per Node Min).
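
As a quick sanity check of the Core arithmetic in the table above, the following illustrative Python sketch (ours, not an IBM tool) derives Cores/Proc and Cores/System from Cores/SCM, assuming 4x identical SCMs per Processor FC and a maximum 4-Node System, with the Min Core Activation equal to one fully active SCM:

# Illustrative sketch (not an IBM tool): derive E980 core counts from Cores/SCM.
# Assumes 4x identical SCMs per Processor FC (System Node) and up to 4x Nodes,
# as stated in the text above.

PROC_FCS = {            # FC: Cores per SCM
    "#EFP1": 8,
    "#EFP2": 10,
    "#EFP3": 12,
    "#EFP4": 11,
}

SCMS_PER_NODE = 4
MAX_NODES = 4

for fc, cores_per_scm in PROC_FCS.items():
    cores_per_node = cores_per_scm * SCMS_PER_NODE     # Cores/Proc FC
    cores_per_system = cores_per_node * MAX_NODES      # 4-Node System
    min_activation = cores_per_scm                     # first SCM fully active
    print(f"{fc}: {cores_per_node} Cores/Node, "
          f"{cores_per_system} Cores/System (max), "
          f"Min Activation {min_activation}")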

SCMs support Simultaneous Multithreading (SMT) executing up to 8x Threads per Core.
Each SCM has dual Memory Controllers, which support up to 128MB of off-chip eDRAM L4 Cache

512KB L2 and 8MB L3 Cache per Core


Memory:
-------
All Memory is based upon IBM's 1600MHz DDR4 CUoD CDIMMs, as used in Power8 Enterprise Systems.
CDIMMs contain L4 Cache

DDR3 CDIMM Memory is also supported but DDR3 and DDR4 Memory cannot be mixed in the same System Node.

Each Memory FC contains 4x DDR4 CDIMM Cards

E980 DIMM Memory FCs
-----------------------------------------------------
FC #        | #EF20 | #EF21 | #EF22 | #EF23 | #EF24 |
------------|-------|-------|-------|-------|-------|
Capacity    | 128GB | 256GB | 512GB | 1.0TB | 2.0TB |
------------|-------|-------|-------|-------|-------|
4x CCIN     |  31ED |  31EE |  31EF |  31FC |  31FD |
------------|-------|-------|-------|-------|-------|
Card Cap.   |  32GB |  64GB | 128GB | 256GB | 512GB |
------------|-------|-------|-------|-------|-------|
Height      |   2U  |   2U  |   4U  |   4U  |   4U  |
------------|-------|-------|-------|-------|-------|
Equiv P8 FC | #EM8V | #EM8W | #EM8X | #EM8Y |       |
-----------------------------------------------------

Each 5U System Node contains 32x CDIMM Slots (8x Memory FCs)
Max Mem per Node: 16TB

Minimums Required:
------------------
Min 50% of available DIMM Slots must be populated
Min 50% of installed DIMMs must be Activated

Therefore the Min per node is:

 32x Slots * 50% = 16x Slots Populated via 4x #EF20 128GB (512GB Installed)
 with 256GB Memory Activations
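
For reference, here is that per-Node minimum arithmetic as a minimal Python sketch (an illustration of the 50% rules quoted above, not an IBM configurator):

# Illustrative sketch: E980 per-Node memory minimums.
# Rules from the text: >= 50% of the 32x CDIMM Slots populated,
# >= 50% of installed capacity activated; smallest FC is #EF20 (4x 32GB = 128GB).

SLOTS_PER_NODE = 32
CDIMMS_PER_FC = 4
SMALLEST_FC_GB = 128

min_slots = SLOTS_PER_NODE // 2              # 16x Slots populated
min_fcs = min_slots // CDIMMS_PER_FC         # 4x #EF20
min_installed_gb = min_fcs * SMALLEST_FC_GB  # 512GB installed
min_activated_gb = min_installed_gb // 2     # 256GB of Activations

print(min_slots, min_fcs, min_installed_gb, min_activated_gb)
# -> 16 4 512 256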
  

It is important to understand that each Processor FC (System Node) delivers 4x SCM Proc Modules
Each SCM Proc controls 8x DIMM Slots via 2x Memory Controllers

At least 4x Slots (per SCM Proc) must be physically populated with the same size DIMM - because each Memory Controller requires a matched Pair (Quad per SCM Proc).

This is where the 50% populated rule above comes from; it means that the initial DIMMs are spread evenly across each of the 4x SCM Procs.

When populating the remaining 4x Slots (per SCM Proc), they must be installed in Quads.

Consequently, DIMM Slots per SCM Proc are either 50% or 100% Populated.

Does each SCM Proc have to contain all the same DIMMs?

There is a discrepancy in IBM's Published information regarding this.
Per IBM Salesmanual, 'CDIMMs must be identical on the same SCM'
Per IBM RedBooks, 'two memory features of different CDIMM capacity per POWER9 processor module' are supported.

We defer to RedBooks in this instance.

Technically, this means that a System Node can support DIMMs of different Capacities.
But this is not recommended, for general Memory Load Balancing reasons.

Based upon the above, Physical DIMM Configs per Node are 16, 20, 24, 28, 32
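
The sketch below (illustrative Python, not an IBM configurator) enumerates those per-Node counts from the per-SCM rule above, i.e. each of the 4x SCM Procs carries either 4x or 8x CDIMMs:

# Illustrative sketch: valid physical CDIMM counts per E980 System Node.
# Per the text: each of the 4x SCM Procs is either 50% (4x) or 100% (8x) populated.

from itertools import product

SCM_OPTIONS = (4, 8)      # CDIMMs per SCM Proc
SCMS_PER_NODE = 4

valid_counts = sorted({sum(combo) for combo in product(SCM_OPTIONS, repeat=SCMS_PER_NODE)})
print(valid_counts)
# -> [16, 20, 24, 28, 32]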

Memory Activation:
------------------
Memory Activations can be Permanent (Static) or Temporary (Mobile).

For the Server as a whole, 50% of the installed Memory must be activated.

The type of min Activation depends on the presence of #EB35 Enterprise Pool Mobile Enablement
-----------------------------------
            |  Permanent | Mobile |
With #EB35  |    25%     |   25%  |
W/Out #EB35 |    50%     |   N/A  |
-----------------------------------
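
A minimal Python sketch of that minimum (percentages are of installed capacity; the helper name is ours and the split follows the table above):

# Illustrative sketch: minimum E980 memory activations for a given installed capacity.
# Per the text: 50% of installed memory must be activated; with #EB35
# (Enterprise Pool Mobile Enablement) up to half of that minimum may be Mobile.

def min_activations_gb(installed_gb, has_eb35):
    total_min = installed_gb // 2            # 50% of installed
    if has_eb35:
        static_min = installed_gb // 4       # 25% Permanent (Static)
        mobile_min = total_min - static_min  # remaining 25% may be Mobile
    else:
        static_min = total_min               # all 50% Permanent
        mobile_min = 0
    return {"static_GB": static_min, "mobile_GB": mobile_min}

print(min_activations_gb(4096, has_eb35=False))  # {'static_GB': 2048, 'mobile_GB': 0}
print(min_activations_gb(4096, has_eb35=True))   # {'static_GB': 1024, 'mobile_GB': 1024}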

Memory Activation FCs:
----------------------
#EMAT 1GB Memory Activation
#EMAU 100GB Memory Activations
#EMAV 100GB Mobile Memory Activations
#EMAD 100GB Mobile Enabled Memory Activations
#ELMD 512GB Memory Activations for Power Linux
#EMBA 8TB Activations 
#EMB6 4TB Activations 

Memory Bandwidth depends upon the number of physically installed DIMMs and Activations.


Storage Bays\Backplanes:
------------------------
Each System Node supports 4x #EC5J 800GB NVMe U.2 SSD Modules

For the E980, these are considered Hot-Swap.

NVMe U.2 Modules are directly attached and are driven by an integrated PCIe (x4) controller.

There are no onboard SAS HDDs or SSDs.

All SAS Drives are attached via the #ESLS EXP24SX 24-Bay SAS 2.5-inch SFF-2 HDD/SSD Storage Enclosure.

E980 supports up to 4,032 Drives via a maximum of 168x #ESLS.
#ESLS in turn is attached via #EMX0 12x Slot PCIe3 I/O Expansion Drawers

Up to 4x #EMX0 I/O Drawers can be attached per System Node and each #EMX0 can support up to 16x #ESLS.
This would imply a max of 256x #ESLS, but cabling and adapter restrictions reduce this to 168.
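
The arithmetic behind those figures, as an illustrative Python sketch (the 24x Bays per #ESLS and 168x supported maximum are from the text above):

# Illustrative sketch: E980 external SAS drive ceiling via #EMX0 / #ESLS.
# Per the text: up to 4x #EMX0 per Node, up to 16x #ESLS per #EMX0,
# 24x Bays per #ESLS, supported maximum of 168x #ESLS per System.

MAX_NODES = 4
EMX0_PER_NODE = 4
ESLS_PER_EMX0 = 16
BAYS_PER_ESLS = 24
SUPPORTED_ESLS_MAX = 168     # cabling/adapter-limited figure

theoretical_esls = MAX_NODES * EMX0_PER_NODE * ESLS_PER_EMX0  # 256
supported_drives = SUPPORTED_ESLS_MAX * BAYS_PER_ESLS         # 4,032

print(theoretical_esls, supported_drives)
# -> 256 4032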


I/O Slots:
----------
8x PCIe4 (x16) I/O Slots (C1 to C8) per System Node

For all I/O Slots:-
  Adapters are mounted in Blind Swap Cassettes (BSC)
  BSCs support Low Profile PCIe Adapter Cards
  All Adapter Slots are CAPI 2.0 Enabled
  All Slots are Hot-Swap compliant
  All Slots support PCIe1 to PCIe4 Adapters

Each E980 System Node ships with a full set of BSCs.

Unpopulated BSCs are still required for proper air-flow.

There are no FCs for E980's BSCs.

The Server can automatically speed up fans to increase airflow across the I/O adapters if an adapter is known to require higher cooling levels.

External I/O:
Additional PCIe Adapters can be attached via #EMX0 12x Slot PCIe3 I/O Expansion Drawers
#EMX0 contains 4x PCIe3 (x16) Slots and 8x PCIe3 (x8) Slots

Each System Node supports up to 4x #EMX0
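
As a worked example of the resulting slot counts, an illustrative Python sketch (our arithmetic, assuming 0x to 4x #EMX0 per Node):

# Illustrative sketch: PCIe slot count per E980 System Node with #EMX0 drawers.
# Per the text: 8x internal PCIe4 (x16) Slots per Node; each #EMX0 adds
# 4x PCIe3 (x16) + 8x PCIe3 (x8) Slots; up to 4x #EMX0 per Node.

INTERNAL_SLOTS = 8
EMX0_SLOTS = 4 + 8           # 12x Slots per drawer
MAX_EMX0_PER_NODE = 4

for drawers in range(MAX_EMX0_PER_NODE + 1):
    total = INTERNAL_SLOTS + drawers * EMX0_SLOTS
    print(f"{drawers}x #EMX0 -> {total} PCIe Slots per Node")
# 0 -> 8, 1 -> 20, 2 -> 32, 3 -> 44, 4 -> 56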


Power Supplies:
---------------

For the System Control Unit
...........................

The SCU is powered from the System Node(s) via UPIC Cables.

2x UPIC Cables are included in #EFCA CEC Interconnect Cables: Drawer #1 (P9)

In a Single Node Config, both UPIC Cables connect the SCU to the System Node.
In a Dual Node Config, 1x Cable connects the SCU to Node-1 and the other connects the SCU to Node-2.

Only the first 2x Nodes connect to the SCU.

For the System Nodes
....................

Each System Node includes 4x Hot-Swap 1950W 200V-240V AC Power Supply Units (PSUs).

These provide 2 + 2 redundant power.

The System will function with just 2x working PSUs, but a failed PSU must remain installed in the System until it is replaced.

Each PSU requires a Line Cord Conduit (Chunnel) which conveys power from the rear of the System Node to the Power Supplies in the front.

There are 2x Left Cords and 2x Right Cords.

These are included as part of the 5U Node Drawer.

Note: IBM SM refers to these as FC #EMXA.

We believe this is incorrect.
As far as we can tell, these are the same AC Power Chunnels used elsewhere under that and other FCs, but they do not have separate FCs in the context of the E980.




CEC Interconnect Cabling:
-------------------------

E980 uses revised P9 Specific CEC Interconnect Cables and FCs

#EFCA CEC Interconnect Cable: Drawer #1 (P9)
#EFCB CEC Interconnect Cable: Drawer #2 (P9)
#EFCC CEC Interconnect Cable: Drawer #3 (P9)
#EFCD CEC Interconnect Cable: Drawer #4 (P9)


Power Distribution Units (PDUs):
--------------------------------
Older PDUs represented by FCs #7188, #7109, #7196 are functional but reduce the number of components that can be installed in a Rack.

IBM Manufacturing will integrate older PDUs for the 9080-M9S subject to available minimums.

It is strongly recommended that older #71xx PDUs be replaced with the newer High Function Intelligent PDUs offered via FCs #EPTJ, #EPTL, #EPTN, #EPTQ.

#EPTJ iPDU 200-240V 63A 1/3-Ph UTG-0247:  9x C19 Outlets
#EPTN iPDU 200-240V 63A 1/3-Ph UTG-0247: 12x C13 Outlets
#EPTL iPDU 208V 60A 3-Ph IEC-309 3P+G:  9x C19 Outlets
#EPTQ iPDU 208V 60A 3-Ph IEC-309 3P+G: 12x C13 Outlets


System Ports:
-------------
Each System Node contains 3x USB 3.0 Ports.
A USB Port in the 1st System Node is re-routed to the Front USB 3.0 Port on the SCU.

1x RJ45 System Port
4x 1GbE RJ45 HMC Ports in the SCU (2x per FSP)


Other:
------
The E980 supports up to 1,000 VMs (LPARs) per System


System Boot via:
----------------
  NVMe Drives
  HDD or SSD located in an EXP24SX or EXP12SX Drawer attached to a PCIe SAS Adapter
  Network LAN Adapters
  SAN Attached Fibre Channel or FCoE Adapters (requires #0837 Specify Code)
  External USB based DVD (Front Port)
  USB Memory Key/Flash Drive (Front Port)


Racks:
------
The E980 is designed to fit a standard 19-inch rack.

The recommended (default) Rack is #ECR0 7965-S42 (42U)
This Rack is optimized for modern Cabling and weight distribution.


The following older Racks are available but are not recommended due to cabling limitations.
 #0553 7014-T42 (42U) 2.0m
 #0551 7014-T00 (36U) 1.8m

For the 7014 Rack, the limitations are these:
------------------------------------------------
Cabling        | Reserve for      | Rack Space |
Implementation | Cabling          |  Net Loss  |
---------------|------------------|------------|
Overhead       | Top 2U           |     2U     |
Raised Floor   | Bottom 2U        |     2U     |
Both           | Top + Bottom  2U |     4U     |
------------------------------------------------

#ECR0 7965-S42 does not suffer from the same wasted Rack Space considerations.

Other Racks and OEM Racks should be checked first with IBM Services.

If a Rack is not required, #ER21 Factory-Deracking should be placed on the order.


#ECR0 Rack Front Door Options
-----------------------------
#ECRA Rack Acoustic Front Door for #ECR0 S42 Rack
#ECRF Rack Front Door (High-End Appearance) for #ECR0 S42 Rack
#ECRM Rack Front Door (Flat Black) for #ECR0 S42 Rack

#0553 Rack Front Door Options
-----------------------------
#EC08 Slim Front Acoustic Door for #0553 7014-T42 (42U) 2.0m 19-inch Rack
#ERG7 19-inch 2.0m Rack Front Door - High Perforation (Black)
#ERGD Rack Doors For 2.0m Ruggedized Rack
#6069 19-inch 2.0m Rack Optional Front Door (High Perforation)
#6272 19-inch 2.0m Rack Thin Profile Front Trim Kit (Black) New Order


Regardless of Rack used, SCU and System Node installation is specifically designed as a bottom-up installation.

The SCU is located below Sys Node-1, Sys Node-1 is below Sys Node-2, and so on.
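
A minimal Python sketch of the Rack space consumed by that bottom-up stack, assuming the 2U SCU and 5U Node heights given earlier (it ignores PDUs, I/O Drawers and clearance):

# Illustrative sketch: Rack units consumed by the E980 SCU + System Node stack.
# Per the text: 2U System Control Unit at the bottom, then 1x to 4x 5U System Nodes.

SCU_HEIGHT_U = 2
NODE_HEIGHT_U = 5

for nodes in range(1, 5):
    total_u = SCU_HEIGHT_U + nodes * NODE_HEIGHT_U
    print(f"{nodes}x Node(s): {total_u}U (SCU + Nodes only)")
# 1 -> 7U, 2 -> 12U, 3 -> 17U, 4 -> 22U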

Rack Extension FCs
------------------
#ECRK 5-inch (12.7 cm) Rear Rack Extension 
#ERG0 8-inch (20.3 cm) Rear Rack Extension

These FCs provide space to hold cables on the side of the Rack and keep the center area clear for cooling and service access.

Any use of PCIe I/O Drawers or External SAS Drive Drawers should use the 8-inch extension.


Rack Lift Tool Kits:
--------------------
Nodes and Drawers can be heavy (up to 190 lbs) and require 3 or 4 people to install safely.

Alternatively, IBM's Rack Lift Tool Kit provides a hand-crank pulley method to lift and position heavy Drawers.

The following Lift Tools are supported:
#EB2Z Service Lift Tool (19-inch Racks)
#EB3Z Service Lift Tool (Based on GenieLift GL-8 Standard)
#EB4Z Service Wedge Shelf Tool Kit for #EB3Z


Physical Specifications:
------------------------

System Control Unit
-------------------------------
Width:  | 446 mm  (17.5 in)   |
Depth:  | 780 mm  (30.7 in)   |
Height: |  86 mm  (3.4 in)    |
Weight: |  23 kg  (50 lb)     |
-------------------------------

System Node
-------------------------------
Width:  | 446 mm  (17.5 in)   |
Depth:  | 867 mm  (34.1 in)   |
Height: | 218 mm  (8.5 in)    |
Weight: | 86 kg   (190 lb)    |
-------------------------------


----------------------------------------------------
Temps:               |                             |
 Non-Operating       |  5C-45C (41F-113F)          |
 Recommended         | 18C-27C (64F-80F)           |
 Max Allowed         |  5C-40C (41F-104F)          |
---------------------|-----------------------------|
Operating Voltage    | 200V-240V A/C               |
Operating Frequency  | 50-60Hz +/-3Hz              |
---------------------|-----------------------------|
Power Consumption    | 4,130W Max (per Node)       |
Power Source Loading | 4.2kVA Max (per Node)       |
---------------------|-----------------------------|
Thermal Output       | 14.4K Btu/hr Max (per Node) |
Max Altitude         | 3,050m (10,000-ft)          |
---------------------|-----------------------------|
Noise levels         |                             |
 Typical Config      | 8.5 bels (Operating/Idle)   |
                     | 9.0 bels (Heavy Workload)   |
----------------------------------------------------


Primary O/S Specify Codes:
--------------------------
#2145 Primary O/S IBM i
#2146 Primary O/S AIX
#2147 Primary O/S Linux (for RHEL, SLES, Ubuntu)

Min O/S Levels:
---------------
AIX Version 7.1 with TL 7100-04 and SP 7100-04-07-1845
IBM i 7.2 TR9

RHEL 7.5 for Power LE (p8compat)
SLES 11 SP4
VIOS 2.2.6.23
Java 7.1

...

Original Content via RISC Analysis © 2019