POWER9 9040-MR9 E950 via RISC Analysis DataBox : www.riscanalysis.com/databox/
9040-MR9 E950 2018/08/07

19-Inch Rack Mount (4U) Drawer


9040-MR9 E950 Processors
-----------------------------------------------------
                    | #EPWR | #EPWS | #EPWT | #EPWY |
Base GHz            |  3.6  |  3.4  |  3.15 |  3.2  |
Max GHz             |  3.8  |  3.8  |  3.8  |  3.8  |
--------------------|-------|-------|-------|-------|
CCIN                |  5C0D |  5C10 |  5C11 |       |
--------------------|-------|-------|-------|-------|
Cores/Proc          |    8  |   10  |   12  |   11  |
Min/Max Procs       |  2/4  |  2/4  |  2/4  |  2/4  |
Min Core Act (2P/4P)|  8/16 | 10/20 | 12/24 | 11/22 |
--------------------|-------|-------|-------|-------|
Other Proc FCs      |       |       |       |       |
--------------------|-------|-------|-------|-------|
Healthcare Solution |       |       | #EHC4 |       |
--------------------|-------|-------|-------|-------|
Activation          |       |       |       |       |
--------------------|-------|-------|-------|-------|
1x Core             | #EPWV | #EPWW | #EPWX | #EPN3 |
1x Core (Linux)     | #ELBG | #ELBP | #ELBH | #ELBR |
1x Core Healthcare  |       |       | #ELAN |       |
--------------------|-------|-------|-------|-------|
Elastic/CoD/Billing |       |       |       |       |
--------------------|-------|-------|-------|-------|
Enablement          | #EM9U | #EM9U | #EM9U | #EM9U |
1x Proc Day         | #EPN0 | #EPN5 | #EPNK | #EPN8 |
100x Proc Days      | #EPN1 | #EPN6 | #EPNL | #EPN9 |
100x Proc Mins      | #EPN2 | #EPN7 | #EPNM | #EPNN |
-----------------------------------------------------

4x Socket System (up to 4x Procs) with Simultaneous Multithreading of up to 8x Threads per Core (SMT8).

Each Single Chip Module (SCM) delivers 230GB/sec of Memory Bandwidth per Processor Socket via 2x On-Chip Memory Controllers per SCM.

The Memory Controllers utilize up to 128MB of Off-Chip eDRAM L4 Cache.

Each System requires a minimum of 2x Processors to be installed.
Processors are installed in Pairs.


The first Proc in a pair must have all Cores activated.
Mixing of different Processors on a System is not allowed.
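The activation minimums in the Processor table follow directly from these pairing rules. A minimal Python sketch (the function name and structure are ours, for illustration only):

```python
def min_core_activations(cores_per_proc: int, procs: int) -> int:
    """Minimum permanent core activations for an E950 config.

    Assumes the rules stated above: Processors are installed in
    pairs (2x or 4x total), and the first Proc in each pair must
    have all of its Cores activated.
    """
    if procs not in (2, 4):
        raise ValueError("E950 supports 2x or 4x Procs only")
    pairs = procs // 2
    return cores_per_proc * pairs

# Matches the Min Core Activation row of the Processor table,
# e.g. 12-core #EPWT: 12 with 2x Procs, 24 with 4x Procs.
```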


Memory:
-------
Per IBM, the System uses Industry Standard DDR4 DIMMs;
however, the DIMMs must be sourced via IBM.

IBM also state that these are the same DIMMs as used in POWER9 Scale-Out Servers.

-------------------------------------------------------
FCs   |  Capacity/Type                         | CCIN |
------|----------------------------------------|------|
#EM6A |   8GB DDR4 1600MHz ECC Memory L4 Cache |      |
#EM6B |  16GB DDR4 1600MHz ECC Memory L4 Cache |      |
#EM6C |  32GB DDR4 1600MHz ECC Memory L4 Cache |      |
#EM6D |  64GB DDR4 1600MHz ECC Memory L4 Cache |      |
#EM6E | 128GB DDR4 1600MHz ECC Memory L4 Cache |      |
------|----------------------------------------|------|
#EM03 | 16x Slot DDR4 DIMM Memory Riser Card   | 2C62 |
#EMEF | DDR4 Memory VRM                        | 51E1 |
------|----------------------------------------|------|
#EMAM | Active Memory Expansion (optional)     |      |
------|----------------------------------------|------|
#EMAP | 1GB Memory Activation DDR4 POWER9      |      |
#EMAQ | 100GB Memory Activation DDR4 POWER9    |      |
-------------------------------------------------------

Each installed Processor requires:
  1x #EMEF DDR4 Memory VRM
  1x #EM03 16x Slot DDR4 DIMM Memory Riser Card

Each Proc/VRM supports up to 2x #EM03 Memory Riser Cards
Each Memory Riser Card supports 16x DDR4 DIMM Memory Slots

----------------------------------------
              |   2x Proc |  4x Proc   |
--------------|-----------|------------|
Min/Max #EMEF |     2/2   |    4/4     |
Min/Max #EM03 |     2/4   |    4/8     |
--------------|-----------|------------|
Min/Max Slots |   32/64   |   64/128   |
Min/Max Mem   | 128GB/8TB | 256GB/16TB |
----------------------------------------

Memory is installed in Octets (8x identical DIMMs).
All DIMMs on a Riser Card must be identical.

Memory on the second Riser Card per Proc can be of different capacity to the first Riser Card.

However, for load balancing and optimization, IBM recommends that memory is the same across all Riser Cards and Procs.

Overall System memory bandwidth increases with more populated DIMM Slots.
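Putting the Riser Card and Octet rules together reproduces the Min/Max figures in the table above. A short sketch (constant and function names are illustrative):

```python
SLOTS_PER_RISER = 16                    # per #EM03 Riser Card
OCTET = 8                               # DIMMs install in groups of 8
DIMM_SIZES_GB = (8, 16, 32, 64, 128)    # #EM6A through #EM6E

def memory_range_gb(procs: int) -> tuple:
    """Min/max installable memory (GB) for a 2x or 4x Proc config.

    Assumes one mandatory #EM03 Riser per Proc (up to two each),
    with at least one Octet of the smallest DIMM per Riser.
    """
    if procs not in (2, 4):
        raise ValueError("E950 supports 2x or 4x Procs only")
    min_gb = procs * OCTET * min(DIMM_SIZES_GB)               # 1x Riser/Proc
    max_gb = procs * 2 * SLOTS_PER_RISER * max(DIMM_SIZES_GB)
    return min_gb, max_gb

# 2x Procs -> (128, 8192): 128GB min, 8TB max
# 4x Procs -> (256, 16384): 256GB min, 16TB max
```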

Permanent Memory Activations:
Min Memory Activation via #EMAP or #EMAQ:
128GB or 50% of installed capacity (whichever is higher)
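The minimum-activation rule can be expressed as a one-liner (a sketch; the function name is ours):

```python
def min_memory_activation_gb(installed_gb: int) -> int:
    """Minimum permanent Memory Activation (#EMAP/#EMAQ):
    128GB or 50% of installed capacity, whichever is higher."""
    return max(128, installed_gb // 2)

# e.g. 1TB installed -> 512GB must be permanently activated;
# 192GB installed -> the 128GB floor applies.
```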

Temporary Memory Activations (for Memory not permanently activated):
#EM9U  90 Days Elastic CoD Memory Enablement
#EMJE   8 GB-Day billing for Elastic CoD Memory
#EMJF 800 GB-Day billing for Elastic CoD Memory

An HMC is required to enable CoD Memory.

Important Note:
---------------
At the time of writing, IBM say these DIMMs are the same as the non-CUoD PC DIMMs (#EM60-#EM65) and (#EM6G-#EM6M), with the same PNs and CCINs.

We have disputed this with IBM since we believe these are 'Centaur' DIMMs with L4 Cache.


I/O Slots:
----------

The number of available I/O Slots depends on the number of Processors installed:
-----------------------------------------------
 9040-MR9 E950              |   Procs   | see |
                            |  2  |  4  | ref |
----------------------------|-----|-----|-----|
PCIe3  (x8) Slot C6         |  1  |  1  |  #1 |
----------------------------|-----|-----|-----|
PCIe4  (x8) Slots C9, C12   |  2  |  2  |  #2 |
----------------------------|-----|-----|-----|
PCIe4 (x16) Slots (CAPI)    |  4  |  8  |     |
     C7,C8, C10, C11        |  Y  |  Y  |     |
     C2-C5                  | N/A |  Y  |  #3 |
----------------------------|-----|-----|-----|
#EMX0 12x PCIe3 Exp Drawers |  2  |  4  |     |
-----------------------------------------------

All I/O Adapters attach via Blind Swap Cassettes and are Hot-Swap.
All Slots support Full Height, Half Length Adapter Cards.
All Slots are SR-IOV capable for Virtualization (Single-Root I/O Virtualization).

Note Refs:
----------
#1: Slot C6 is reserved for initial Ethernet LAN Adapter.

#2: Slot C12 must contain a #EJ0K SAS RAID Adapter if the #EJBB 8x Slot
    Backplane is selected.
    Slots C9 and C12 must each contain a #EJ0K SAS RAID Adapter if the
    #EJSB 4+4 Split Backplane is selected.

(Dual Storage Adapter (twinned) Config is not supported - the Adapters are independent)

#3: Slots C2 to C5 are not available in Dual Proc Config

The Server can automatically speed up fans to increase airflow across the I/O adapters if an adapter is known to require higher cooling levels.

Additional PCIe Adapters can be attached via #EMX0 12x Slot PCIe3 I/O Expansion Drawers
#EMX0 contains 4x PCIe3 (x16) Slots and  8x PCIe3 (x8) Slots
A Dual Proc Config supports 2x #EMX0
A Quad Proc Config supports 4x #EMX0

#EMX0 will require #ERG0 8-inch Rear Rack Extension for 19-inch 42U 2.0m Racks
(RIO Cable Connections will stop the rear door from closing).

Per IBM RedBooks:
With 2x Procs:  7x PCIe Slots
     3x Procs:  9x PCIe Slots
     4x Procs: 11x PCIe Slots

We believe this to be wrong.


It is our understanding that the system supports only 2x or 4x Procs.


Backplanes and Drives:
----------------------

There are 3x Backplane choices.

#EJ0B Storage Backplane: 0x SAS SFF-3 HDD/SSD + 4x NVMe
#EJBB Storage Backplane: 8x SAS SFF-3 HDD/SSD + 4x NVMe
#EJSB Storage Backplane: 8x SAS SFF-3 HDD/SSD (Split 4+4) + 4x NVMe

All 3x FCs deliver the same cage structure and all report CCIN 2D37.

SAS Drives (HDD or SSD) are 2.5-inch SFF-3.
NVMe Drive Modules are 2.5-inch U.2, 4K Byte Sector Format

All SAS SFF-3 Drive Bays support Hot-Swap.

Each of these FCs has a minimum additional requirement:
--------------------------------------------------------------
BP FC | Min Required DASD    | Required Controller | In Slot |
------|----------------------|---------------------|---------|
#EJ0B | 1x NVMe SSD Module   | 0x SAS Controller   |  N/A    |
#EJBB | 1x SAS SFF-3 HDD/SSD | 1x SAS Controller   |  C12    |
#EJSB |                      | 2x SAS Controllers  | C12, C9 |
--------------------------------------------------------------

Required SAS Controller: #EJ0K SAS RAID Adapter
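The controller requirements above can be captured in a small lookup for config checking (a sketch; the dict and function names are our own, not IBM tooling):

```python
# Backplane FC -> slots that must each hold a #EJ0K SAS RAID Adapter
EJ0K_SLOTS = {
    "#EJ0B": [],             # NVMe only, no SAS Controller required
    "#EJBB": ["C12"],        # 1x Controller drives all 8x SAS Bays
    "#EJSB": ["C12", "C9"],  # 4+4 Split: one Controller per half
}

def required_ej0k_slots(backplane_fc: str) -> list:
    """Slots that must be populated with #EJ0K for a given Backplane."""
    return EJ0K_SLOTS[backplane_fc]
```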


Drive Support by Backplane
------------------------------------------------------------------
BP FC | NVMe |  SAS  | Minimum Drive Required                    |
------|------|-------|-------------------------------------------|
#EJ0B |   4  |   0   | #EC5J 800GB NVMe U.2 4K 2.5-in SSD Module |
#EJBB |   4  |   8   | #ESNK 300GB 15K RPM SAS SFF-3 HDD 4K      |
#EJSB |   4  | 4 + 4 |                                           |
------------------------------------------------------------------

For #EJ0B: NVMe Modules are not driven by a SAS Controller.
For #EJBB: The SAS Controller drives all 8x SAS Drives.
For #EJSB: Each Controller supports max 4x Drives.

HDD or SSD drives can be attached through EXP24SX or EXP12SX expansion drawers.

The external USB DVD plugs into one of the two front USB ports.

External Storage is supported via #ESLL/#ELLL EXP12SX and #ESLS/#ELLS EXP24SX Storage Enclosures.

(for FC ordering purposes, only #ESLL and #ESLS are used)

Up to 64x Enclosures are supported (1,536 Drive Slots).


Internal RAID:
--------------
#EJBB and #EJSB Backplanes support RAID-0, -5, -6, -10:

----------------------------------
 RAID Level | -0 | -5 | -6 | -10 |
------------|----|----|----|-----|
 Min Drives |  2 |  3 |  4 |   2 |
 Same Size  |  Y |  Y |  Y |   N |
------------|----|----|----|-----|
 Hot Swap   |  N |  Y |  Y |   Y |
----------------------------------

All 3x Backplanes are supported by AIX, Linux, VIOS

Field changes to installed Backplane types are supported but require Server downtime.


Power Supplies:
---------------
The System requires 4x #EB3M 2000W 200-240V AC Power Supplies

This provides 2+2 Redundancy

In the event of a Power Supply failure, Power Supplies can be Hot-Swapped.


Racks:
------
The 9040-MR9 E950 Server is designed to fit a standard 19-inch Rack which is ordered via FC.

Some MTM-based IBM Racks are supported for Field Integration.

 #ECR0 7965-S42 (42U) 2.0m
 #0553 7014-T42 (42U) 2.0m
       7014-T00 (36U) 1.8m (Field Only-No Orderable FC)

Non-IBM Racks should be checked first with IBM Services.

The E950 has adjustable Rails that can accommodate Rack depths of 22.75 inches to 30.5 inches.


Rack Related FCs:
-----------------
#6249 19-inch 2.0m Rack Acoustic Doors (Front and Rear)
#ERG7 19-inch 2.0m Rack Front Door - High Perforation (Black)
#6069 19-inch 2.0m Rack Optional Front Door (High Perforation)
#6272 19-inch 2.0m Rack Thin Profile Front Trim Kit (Black)


Power Distribution Units (PDUs):
--------------------------------
Older PDUs represented by FCs #7188, #7109, #7196 are functional but reduce the number of components that can be installed in a Rack.

Therefore, IBM Manufacturing will not integrate the older PDUs for the 9040-MR9.

It is strongly recommended that older #71xx PDUs be replaced with the newer High Function Intelligent PDUs offered via FCs #EPTJ, #EPTL, #EPTN, #EPTQ.

#EPTJ iPDU 200-240V 63A 1/3-Ph UTG-0247:  9x C19 Outlets
#EPTN iPDU 200-240V 63A 1/3-Ph UTG-0247: 12x C13 Outlets
#EPTL iPDU 208V 60A 3-Ph IEC-309 3P+G:  9x C19 Outlets
#EPTQ iPDU 208V 60A 3-Ph IEC-309 3P+G: 12x C13 Outlets


System Ports:
-------------
2x Front USB 3.0 Ports
2x Rear USB 3.0 Ports (limited use)
2x 1GbE RJ45 HMC Ports
1x RJ45 System Port


System Boot via:
----------------
  NVMe Drives
  Internal SAS Drives
  HDD or SSD located in an EXP24SX or EXP12SX Drawer attached to a PCIe SAS Adapter
  Network LAN Adapters
  SAN Attached Fibre Channel or FCoE Adapters (requires #0837 Specify Code)
  External USB based DVD (Front Port)
  USB Memory Key/Flash Drive (Front Port)


Multimedia Drawers:
-------------------
7226-1U3, 7216-1U2, 7214-1U2
Multimedia Drawers support DVDs, Tape Drives, and RDX Docking Stations.
Up to 6x Multimedia Drawers can be attached.


Physical Specifications:
-------------------------------
Width:  | 448 mm  (17.5 in)   |
Depth:  | 902 mm  (35.5 in)   |
Height: | 175 mm  (6.9 in)    |
Weight: | 69 kg   (152 lb)    |
-------------------------------

------------------------------------------------------------------------------
Temps:               |                                                       |
 Non-Operating       |  5C-45C (41F-113F)                                    |
 Recommended         | 18C-27C (64F-80F)                                     |
 Max Allowed         | 10C-40C (50F-104F)                                    |
---------------------|-------------------------------------------------------|
Operating Voltage    | 200V-240V AC                                          |
Operating Frequency  | 50-60Hz +/-3Hz                                        |
---------------------|-------------------------------------------------------|
Power Consumption    | 3,850W Max (per System Node)                          |
Power Source Loading | 3.9kVA Max (per System Node)                          |
---------------------|-------------------------------------------------------|
Thermal Output       | 14.4K Btu/hr Max (per System Node)                    |
Max Altitude         | 3,050m (10,000-ft)                                    |
---------------------|-------------------------------------------------------|
Noise levels         |                                                       |
 Typical Config      | 7.4 bels (Operating/Idle) (4x 12-Core w/2.0TB Memory) |
 Max Config          | 8.1 bels (Heavy Workload) (4x 12-Core w/2.0TB Memory) |
------------------------------------------------------------------------------


Primary O/S Specify Codes:
--------------------------
#2146 Primary O/S AIX
#2147 Primary O/S Linux (for RHEL, SLES, Ubuntu)


Min O/S Levels:
---------------
AIX Version 7.1 w/ TL 7100-04 + SP 7100-04-07-1845
RHEL 7.5 for Power LE (p8compat)
SLES 11 SP4
VIOS 2.2.6.23
Java 7.1

...

Original Content via RISC Analysis © 2019