Chapter 4


Procedure Interfaces

This chapter describes the procedure interfaces for Fortran and C defined in the MPI 3.1 standard.

The description of the interfaces for C provides the types, names, and arguments of functions using function prototypes. The names of MPI functions have the prefix MPI_ followed by one uppercase character and then lowercase characters. The return code of an MPI function is returned as the function value, of type int.

The description of the interfaces for Fortran provides the names, arguments, and argument types of procedures. The return code of an MPI procedure is returned in the last argument, ierr, of type INTEGER.

It is possible to check whether an invocation of an MPI procedure succeeded by comparing the value of the return code with the values of the error codes, which are defined as reserved names in the MPI standard.

Each argument is marked with the strings IN, OUT, or INOUT. Their meanings are as follows:

Table 4-1   Meanings of IN, OUT, and INOUT

String   Meaning
IN       The argument can be referenced but not updated.
OUT      The argument can be updated.
INOUT    The argument can be referenced and updated. It can be
         referenced on some processes and updated on other processes.


For detailed information on MPI constants that can be specified as arguments to each MPI procedure and others, please refer to the MPI standard documentation.

The MPI procedures are classified as follows:



4.1   Point-to-Point Communication

MPI_Bsend/MPI_BSEND Performs blocking transmission in buffer mode
MPI_Bsend_init/MPI_BSEND_INIT Generates a communication request for buffer-mode transmission
MPI_Buffer_attach/MPI_BUFFER_ATTACH Sets the communication buffer for buffer-mode transmission
MPI_Buffer_detach/MPI_BUFFER_DETACH Releases the communication buffer for buffer-mode transmission
MPI_Cancel/MPI_CANCEL Cancels a pending nonblocking communication request
MPI_Get_count/MPI_GET_COUNT Obtains the number of elements of data types from received message information
MPI_Ibsend/MPI_IBSEND Performs nonblocking transmission in buffer mode
MPI_Improbe/MPI_IMPROBE Checks and matches a communication message that has arrived
MPI_Imrecv/MPI_IMRECV Performs a nonblocking receive of the matched message
MPI_Iprobe/MPI_IPROBE Checks a communication message that has arrived
MPI_Irecv/MPI_IRECV Performs a nonblocking receive
MPI_Irsend/MPI_IRSEND Performs nonblocking transmission in ready mode
MPI_Isend/MPI_ISEND Performs nonblocking transmission in standard mode
MPI_Issend/MPI_ISSEND Performs nonblocking transmission in synchronous mode
MPI_Mprobe/MPI_MPROBE Checks and matches a communication message that has arrived
MPI_Mrecv/MPI_MRECV Performs blocking receive of the matched message
MPI_Probe/MPI_PROBE Checks a message that has arrived
MPI_Recv/MPI_RECV Performs a blocking receive
MPI_Recv_init/MPI_RECV_INIT Generates a communication request for receiving a message
MPI_Request_free/MPI_REQUEST_FREE Releases the communication request
MPI_Request_get_status/MPI_REQUEST_GET_STATUS Returns the status of a request without deallocating or deactivating it
MPI_Rsend/MPI_RSEND Performs blocking transmission in ready mode
MPI_Rsend_init/MPI_RSEND_INIT Generates a communication request for ready-mode transmission
MPI_Send/MPI_SEND Performs blocking transmission in standard mode
MPI_Send_init/MPI_SEND_INIT Generates a communication request for standard-mode transmission
MPI_Sendrecv/MPI_SENDRECV Performs blocking send/receive
MPI_Sendrecv_replace/MPI_SENDRECV_REPLACE Performs blocking send/receive using the same send/receive buffer
MPI_Ssend/MPI_SSEND Performs blocking transmission in synchronous mode
MPI_Ssend_init/MPI_SSEND_INIT Generates a communication request for synchronous mode transmission
MPI_Start/MPI_START Requests start of communication according to a communication request
MPI_Startall/MPI_STARTALL Requests start of communication according to communication requests
MPI_Test/MPI_TEST Performs nonblocking communication completion test
MPI_Test_cancelled/MPI_TEST_CANCELLED Confirms the cancellation of communication
MPI_Testall/MPI_TESTALL Performs a completion test on all of one or more nonblocking communications
MPI_Testany/MPI_TESTANY Performs a completion test on any of one or more nonblocking communications
MPI_Testsome/MPI_TESTSOME Tests for completion of at least one of one or more nonblocking communications
MPI_Wait/MPI_WAIT Waits for completion of nonblocking communication
MPI_Waitall/MPI_WAITALL Waits for completion of all of one or more nonblocking communications
MPI_Waitany/MPI_WAITANY Waits for completion of any of one or more nonblocking communications
MPI_Waitsome/MPI_WAITSOME Waits for completion of at least one of one or more nonblocking communications



MPI_Bsend (C) MPI_BSEND (Fortran)

MPI_Bsend is a basic send using user-specified buffering.

#include <mpi.h>
int MPI_Bsend(void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm)
use mpi
CALL MPI_BSEND(buf, count, datatype, dest, tag, comm, ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, tag, comm, ierr
buf IN Initial address of send buffer (choice).
count IN Number of elements in send buffer (nonnegative integer).
datatype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).
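
EXAMPLE

    A minimal sketch of buffer-mode transmission, assuming at least two processes in MPI_COMM_WORLD: rank 0 attaches a buffer sized with MPI_Pack_size plus MPI_BSEND_OVERHEAD, sends with MPI_Bsend, and detaches the buffer, which waits until the buffered message has been transmitted.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, data = 42;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        int packsize, bufsize;
        void *buffer;
        /* Space for one buffered MPI_INT message plus the required overhead. */
        MPI_Pack_size(1, MPI_INT, MPI_COMM_WORLD, &packsize);
        bufsize = packsize + MPI_BSEND_OVERHEAD;
        buffer = malloc(bufsize);
        MPI_Buffer_attach(buffer, bufsize);
        MPI_Bsend(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        /* Detach blocks until the buffered message is safely on its way. */
        MPI_Buffer_detach(&buffer, &bufsize);
        free(buffer);
    } else if (rank == 1) {
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d\n", data);
    }
    MPI_Finalize();
    return 0;
}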



MPI_Bsend_init (C) MPI_BSEND_INIT (Fortran)

MPI_Bsend_init builds a handle for a buffered send.

#include <mpi.h>
int MPI_Bsend_init(void *buf, int count, MPI_Datatype datatype,
                   int dest, int tag, MPI_Comm comm,
                   MPI_Request *request)
use mpi
CALL MPI_BSEND_INIT(buf, count, datatype, dest, tag, comm,
                    request, ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, tag, comm, request, ierr
buf IN Initial address of send buffer (choice).
count IN Number of elements in send buffer (nonnegative integer).
datatype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).





MPI_Buffer_attach (C) MPI_BUFFER_ATTACH (Fortran)

MPI_Buffer_attach attaches a user-defined buffer for sending.

#include <mpi.h>
int MPI_Buffer_attach(void *buffer, int size)
use mpi
CALL MPI_BUFFER_ATTACH(buffer, size, ierr)
<arbitrary> :: buffer(*)
INTEGER     :: size, ierr
buffer IN Initial buffer address (choice).
size IN Buffer size in bytes (integer).
ierr OUT Return code (Fortran only).



MPI_Buffer_detach (C) MPI_BUFFER_DETACH (Fortran)

MPI_Buffer_detach removes an existing buffer.

#include <mpi.h>
int MPI_Buffer_detach(void *bufferptr, int *size)
use mpi
CALL MPI_BUFFER_DETACH(bufferptr, size, ierr)
<arbitrary> :: bufferptr(*)
INTEGER     :: size, ierr
bufferptr OUT Initial buffer address (choice).
size OUT Buffer size in bytes (integer).
ierr OUT Return code (Fortran only).



MPI_Cancel (C) MPI_CANCEL (Fortran)

MPI_Cancel cancels a communication request.

#include <mpi.h>
int MPI_Cancel(MPI_Request *request)
use mpi
CALL MPI_CANCEL(request, ierr)
INTEGER     :: request, ierr
request IN Communication request (handle).
ierr OUT Return code (Fortran only).





MPI_Get_count (C) MPI_GET_COUNT (Fortran)

MPI_Get_count returns the number of "top level" elements.

#include <mpi.h>
int MPI_Get_count(MPI_Status *status, MPI_Datatype datatype,
                  int *count)
use mpi
CALL MPI_GET_COUNT(status, datatype, count, ierr)
INTEGER     :: status(MPI_STATUS_SIZE), datatype, count, ierr
status IN Return status of receive operation (status).
datatype IN Datatype of each receive buffer element (handle).
count OUT Number of received elements (integer).
ierr OUT Return code (Fortran only).
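
EXAMPLE

    A sketch, not taken from the standard text, of receiving a message whose length is not known in advance: MPI_Probe blocks until a message is pending, MPI_Get_count extracts the element count from the status, and the receive buffer is allocated accordingly. Assumes at least two processes.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        int msg[3] = {1, 2, 3};
        MPI_Send(msg, 3, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Status status;
        int count;
        int *buf;
        MPI_Probe(0, 0, MPI_COMM_WORLD, &status);   /* wait for a pending message */
        MPI_Get_count(&status, MPI_INT, &count);    /* number of MPI_INT elements */
        buf = malloc(count * sizeof(int));
        MPI_Recv(buf, count, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d elements\n", count);
        free(buf);
    }
    MPI_Finalize();
    return 0;
}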





MPI_Ibsend (C) MPI_IBSEND (Fortran)

MPI_Ibsend starts a nonblocking buffered send.

#include <mpi.h>
int MPI_Ibsend(void *buf, int count, MPI_Datatype datatype,
               int dest, int tag, MPI_Comm comm,
               MPI_Request *request)
use mpi
CALL MPI_IBSEND(buf, count, datatype, dest, tag, comm, request,
                ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, tag, comm, request, ierr
buf IN Initial address of send buffer (choice).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Improbe (C) MPI_IMPROBE (Fortran)

MPI_Improbe checks and matches a communication message that has arrived.

#include <mpi.h>
int MPI_Improbe(int source, int tag, MPI_Comm comm, int *flag,
                MPI_Message *message, MPI_Status *status)
use mpi
CALL MPI_IMPROBE(source, tag, comm, flag, message, status, ierr)
LOGICAL     :: flag
INTEGER     :: source, tag, comm, message, status(MPI_STATUS_SIZE), ierr
source IN Source rank or MPI_ANY_SOURCE (integer).
tag IN Tag value or MPI_ANY_TAG (integer).
comm IN Communicator (handle).
flag OUT True if a matching message has arrived, otherwise false (logical).
message OUT Returned message (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_Imrecv (C) MPI_IMRECV (Fortran)

MPI_Imrecv begins a nonblocking receive of the matched message.

#include <mpi.h>
int MPI_Imrecv(void *buf, int count, MPI_Datatype datatype,
               MPI_Message *message, MPI_Request *request)
use mpi
CALL MPI_IMRECV(buf, count, datatype, message, request, ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, message, request,
               ierr
buf OUT Initial address of receive buffer (choice).
count IN Number of elements in receive buffer (integer).
datatype IN Datatype of each receive buffer element (handle).
message INOUT Message (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Iprobe (C) MPI_IPROBE (Fortran)

MPI_Iprobe is a nonblocking test for a message.

#include <mpi.h>
int MPI_Iprobe(int source, int tag, MPI_Comm comm, int *flag,
               MPI_Status *status)
use mpi
CALL MPI_IPROBE(source, tag, comm, flag, status, ierr)
LOGICAL     :: flag
INTEGER     :: source, tag, comm, status(MPI_STATUS_SIZE), ierr
source IN Source rank or MPI_ANY_SOURCE (integer).
tag IN Tag value or MPI_ANY_TAG (integer).
comm IN Communicator (handle).
flag OUT True if a message has arrived, otherwise false (logical).
status OUT Status object (status).
ierr OUT Return code (Fortran only).





MPI_Irecv (C) MPI_IRECV (Fortran)

MPI_Irecv begins a nonblocking receive.

#include <mpi.h>
int MPI_Irecv(void *buf, int count, MPI_Datatype datatype,
              int source, int tag, MPI_Comm comm,
              MPI_Request *request)
use mpi
CALL MPI_IRECV(buf, count, datatype, source, tag, comm, request,
               ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, source, tag, comm, request,
               ierr
buf OUT Initial address of receive buffer (choice).
count IN Number of elements in receive buffer (integer).
datatype IN Datatype of each receive buffer element (handle).
source IN Rank of source (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).
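
EXAMPLE

    A sketch of a deadlock-free exchange between two processes: each rank posts MPI_Irecv before MPI_Isend and completes both requests with MPI_Waitall. The example assumes exactly two processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, other, sendval, recvval;
    MPI_Request reqs[2];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;              /* partner rank; assumes two processes */
    sendval = rank;
    MPI_Irecv(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    printf("rank %d received %d\n", rank, recvval);
    MPI_Finalize();
    return 0;
}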



MPI_Irsend (C) MPI_IRSEND (Fortran)

MPI_Irsend begins a nonblocking ready send.

#include <mpi.h>
int MPI_Irsend(void *buf, int count, MPI_Datatype datatype,
               int dest, int tag, MPI_Comm comm,
               MPI_Request *request)
use mpi
CALL MPI_IRSEND(buf, count, datatype, dest, tag, comm, request,
                ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, tag, comm, request,
               ierr
buf IN Initial address of send buffer (choice).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Isend (C) MPI_ISEND (Fortran)

MPI_Isend begins a nonblocking send.

#include <mpi.h>
int MPI_Isend(void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm,
              MPI_Request *request)
use mpi
CALL MPI_ISEND(buf, count, datatype, dest, tag, comm, request,
               ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, tag, comm, request,
               ierr
buf IN Initial address of send buffer (choice).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Issend (C) MPI_ISSEND (Fortran)

MPI_Issend begins a nonblocking synchronous send.

#include <mpi.h>
int MPI_Issend(void *buf, int count, MPI_Datatype datatype,
               int dest, int tag, MPI_Comm comm,
               MPI_Request *request)
use mpi
CALL MPI_ISSEND(buf, count, datatype, dest, tag, comm, request,
                ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, tag, comm, request,
               ierr
buf IN Initial address of send buffer (choice).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Mprobe (C) MPI_MPROBE (Fortran)

MPI_Mprobe checks and matches a communication message that has arrived.

#include <mpi.h>
int MPI_Mprobe(int source, int tag, MPI_Comm comm,
               MPI_Message *message, MPI_Status *status)
use mpi
CALL MPI_MPROBE(source, tag, comm, message, status, ierr)
INTEGER     :: source, tag, comm, message, status(MPI_STATUS_SIZE), ierr
source IN Source rank or MPI_ANY_SOURCE (integer).
tag IN Tag value or MPI_ANY_TAG (integer).
comm IN Communicator (handle).
message OUT Returned message (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_Mrecv (C) MPI_MRECV (Fortran)

MPI_Mrecv begins a blocking receive of the matched message.

#include <mpi.h>
int MPI_Mrecv(void *buf, int count, MPI_Datatype datatype,
              MPI_Message *message, MPI_Status *status)
use mpi
CALL MPI_MRECV(buf, count, datatype, message, status, ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, message, status(MPI_STATUS_SIZE), ierr
buf OUT Initial address of receive buffer (choice).
count IN Number of elements in receive buffer (integer).
datatype IN Datatype of each receive buffer element (handle).
message INOUT Message (handle).
status OUT Status object (Status).
ierr OUT Return code (Fortran only).
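
EXAMPLE

    A sketch of the matched probe/receive pair: MPI_Mprobe dequeues the matching message and returns a message handle, and MPI_Mrecv receives through that handle, so no other receive can intercept the message in between. Assumes at least two processes.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, value = 7;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Message msg;
        MPI_Status status;
        MPI_Mprobe(0, 0, MPI_COMM_WORLD, &msg, &status);  /* match and dequeue */
        MPI_Mrecv(&value, 1, MPI_INT, &msg, MPI_STATUS_IGNORE);
        printf("matched and received %d\n", value);
    }
    MPI_Finalize();
    return 0;
}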



MPI_Probe (C) MPI_PROBE (Fortran)

MPI_Probe is a blocking test for a message.

#include <mpi.h>
int MPI_Probe(int source, int tag, MPI_Comm comm,
              MPI_Status *status)
use mpi
CALL MPI_PROBE(source, tag, comm, status, ierr)
INTEGER     :: source, tag, comm, status(MPI_STATUS_SIZE), ierr
source IN Source rank or MPI_ANY_SOURCE (integer).
tag IN Tag value or MPI_ANY_TAG (integer).
comm IN Communicator (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).




MPI_Recv (C) MPI_RECV (Fortran)

MPI_Recv performs a basic receive.

#include <mpi.h>
int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm,
             MPI_Status *status)
use mpi
CALL MPI_RECV(buf, count, datatype, source, tag, comm, status,
              ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, source, tag, comm, status(MPI_STATUS_SIZE),
               ierr
buf OUT Initial address of receive buffer (choice).
count IN Number of elements in receive buffer (integer).
datatype IN Datatype of each receive buffer element (handle).
source IN Rank of source (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_Recv_init (C) MPI_RECV_INIT (Fortran)

MPI_Recv_init builds a handle for a receive.

#include <mpi.h>
int MPI_Recv_init(void *buf, int count, MPI_Datatype datatype,
                  int source, int tag, MPI_Comm comm,
                  MPI_Request *request)
use mpi
CALL MPI_RECV_INIT(buf, count, datatype, source, tag, comm,
                   request, ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, source, tag, comm, request,
               ierr
buf OUT Initial address of receive buffer (choice).
count IN Number of elements in receive buffer (integer).
datatype IN Datatype of each receive buffer element (handle).
source IN Rank of source or MPI_ANY_SOURCE (integer).
tag IN Message tag or MPI_ANY_TAG (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).





MPI_Request_free (C) MPI_REQUEST_FREE (Fortran)

MPI_Request_free frees a communication request object.

#include <mpi.h>
int MPI_Request_free(MPI_Request *request)
use mpi
CALL MPI_REQUEST_FREE(request, ierr)
INTEGER     :: request, ierr
request INOUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Request_get_status (C) MPI_REQUEST_GET_STATUS (Fortran)

MPI_Request_get_status returns the status of a request without deallocating or deactivating it.

#include <mpi.h>
int MPI_Request_get_status(MPI_Request request, int *flag,
                           MPI_Status *status)
use mpi
CALL MPI_REQUEST_GET_STATUS(request, flag, status, ierr)
LOGICAL     :: flag
INTEGER     :: request, status(MPI_STATUS_SIZE), ierr
request IN Request (handle).
flag OUT True if the operation is complete, otherwise false (logical).
status OUT Status object if flag is true (status).
ierr OUT Return code (Fortran only).



MPI_Rsend (C) MPI_RSEND (Fortran)

MPI_Rsend performs a basic ready send.

#include <mpi.h>
int MPI_Rsend(void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm)
use mpi
CALL MPI_RSEND(buf, count, datatype, dest, tag, comm, ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, tag, comm, ierr
buf IN Initial address of send buffer (choice).
count IN Number of elements in send buffer (nonnegative integer).
datatype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Rsend_init (C) MPI_RSEND_INIT (Fortran)

MPI_Rsend_init builds a handle for a ready send.

#include <mpi.h>
int MPI_Rsend_init(void *buf, int count, MPI_Datatype datatype,
                   int dest, int tag, MPI_Comm comm,
                   MPI_Request *request)
use mpi
CALL MPI_RSEND_INIT(buf, count, datatype, dest, tag, comm,
                    request, ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, tag, comm, request,
               ierr
buf IN Initial address of send buffer (choice).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).





MPI_Send (C) MPI_SEND (Fortran)

MPI_Send performs a basic send.

#include <mpi.h>
int MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)
use mpi
CALL MPI_SEND(buf, count, datatype, dest, tag, comm, ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, tag, comm, ierr
buf IN Initial address of send buffer (choice).
count IN Number of elements in send buffer (nonnegative integer).
datatype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Send_init (C) MPI_SEND_INIT (Fortran)

MPI_Send_init builds a handle for a send.

#include <mpi.h>
int MPI_Send_init(void *buf, int count, MPI_Datatype datatype,
                  int dest, int tag, MPI_Comm comm,
                  MPI_Request *request)
use mpi
CALL MPI_SEND_INIT(buf, count, datatype, dest, tag, comm,
                   request, ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, tag, comm, request,
               ierr
buf IN Initial address of send buffer (choice).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).





MPI_Sendrecv (C) MPI_SENDRECV (Fortran)

MPI_Sendrecv sends and receives a message.

#include <mpi.h>
int MPI_Sendrecv(void *sendbuf, int sendcount,
                 MPI_Datatype sendtype, int dest, int sendtag,
                 void *recvbuf, int recvcount, MPI_Datatype recvtype,
                 int source, int recvtag, MPI_Comm comm,
                 MPI_Status *status)
use mpi
CALL MPI_SENDRECV(sendbuf, sendcount, sendtype, dest, sendtag,
                  recvbuf, recvcount, recvtype, source, recvtag,
                  comm, status, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, dest, sendtag, recvcount,
               recvtype, source, recvtag, comm, status(MPI_STATUS_SIZE), ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
sendtag IN Send message tag (integer).
recvbuf OUT Initial address of receive buffer (choice).
recvcount IN Number of elements in receive buffer (integer).
recvtype IN Datatype of each receive buffer element (handle).
source IN Rank of source (integer).
recvtag IN Receive message tag (integer).
comm IN Communicator (handle).
status OUT Status object referring to receive operation (status).
ierr OUT Return code (Fortran only).
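
EXAMPLE

    A sketch of a ring shift with MPI_Sendrecv: each rank sends its own rank number to its right neighbor while receiving from its left neighbor in one call, which avoids the deadlock risk of pairing separate blocking sends and receives.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, left, right, sendval, recvval;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    right = (rank + 1) % size;
    left  = (rank + size - 1) % size;
    sendval = rank;
    MPI_Sendrecv(&sendval, 1, MPI_INT, right, 0,
                 &recvval, 1, MPI_INT, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    printf("rank %d received %d from rank %d\n", rank, recvval, left);
    MPI_Finalize();
    return 0;
}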



MPI_Sendrecv_replace (C) MPI_SENDRECV_REPLACE (Fortran)

MPI_Sendrecv_replace sends and receives a message using a single buffer.

#include <mpi.h>
int MPI_Sendrecv_replace(void *buf, int count,
                         MPI_Datatype datatype, int dest, int sendtag,
                         int source, int recvtag, MPI_Comm comm,
                         MPI_Status *status)
use mpi
CALL MPI_SENDRECV_REPLACE(buf, count, datatype, dest, sendtag,
                          source, recvtag, comm, status, ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, sendtag, source, recvtag,
               comm, status(MPI_STATUS_SIZE), ierr
buf INOUT Initial address of send and receive buffer (choice).
count IN Number of elements in send and receive buffer (integer).
datatype IN Datatype of each send and receive buffer element (handle).
dest IN Rank of destination (integer).
sendtag IN Send message tag (integer).
source IN Rank of source (integer).
recvtag IN Receive message tag (integer).
comm IN Communicator (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_Ssend (C) MPI_SSEND (Fortran)

MPI_Ssend performs a basic synchronous send.

#include <mpi.h>
int MPI_Ssend(void *buf, int count, MPI_Datatype datatype,
              int dest, int tag, MPI_Comm comm)
use mpi
CALL MPI_SSEND(buf, count, datatype, dest, tag, comm, ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, tag, comm, ierr
buf IN Initial address of send buffer (choice).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Ssend_init (C) MPI_SSEND_INIT (Fortran)

MPI_Ssend_init builds a handle for a synchronous send.

#include <mpi.h>
int MPI_Ssend_init(void *buf, int count, MPI_Datatype datatype,
                   int dest, int tag, MPI_Comm comm,
                   MPI_Request *request)
use mpi
CALL MPI_SSEND_INIT(buf, count, datatype, dest, tag, comm,
                    request, ierr)
<arbitrary> :: buf(*)
INTEGER     :: count, datatype, dest, tag, comm, request, ierr
buf IN Initial address of send buffer (choice).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of each send buffer element (handle).
dest IN Rank of destination (integer).
tag IN Message tag (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).





MPI_Start (C) MPI_START (Fortran)

MPI_Start initiates communication with a persistent request handle.

#include <mpi.h>
int MPI_Start(MPI_Request *request)
use mpi
CALL MPI_START(request, ierr)
INTEGER     :: request, ierr
request INOUT Communication request (handle).
ierr OUT Return code (Fortran only).


NOTES

    The procedure begins a nonblocking communication using a communication request that was generated by the MPI_SEND_INIT, MPI_SSEND_INIT, MPI_BSEND_INIT, MPI_RSEND_INIT, or MPI_RECV_INIT procedures.

    Procedures such as MPI_WAIT and MPI_TEST can be used to confirm the completion of the communication.



MPI_Startall (C) MPI_STARTALL (Fortran)

MPI_Startall starts a collection of requests.

#include <mpi.h>
int MPI_Startall(int count, MPI_Request *array_of_requests)
use mpi
CALL MPI_STARTALL(count, array_of_requests, ierr)
INTEGER     :: count, array_of_requests(*), ierr
count IN List length (integer).
array_of_requests INOUT Array of requests (array of handles).
ierr OUT Return code (Fortran only).


NOTES

    The procedure begins nonblocking communications using communication requests that were generated by the MPI_SEND_INIT, MPI_SSEND_INIT, MPI_BSEND_INIT, MPI_RSEND_INIT, or MPI_RECV_INIT procedures.

    Procedures such as MPI_WAIT and MPI_TEST can be used to confirm the completion of the communications.
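
EXAMPLE

    A sketch of persistent communication, assuming exactly two processes: the requests are created once with MPI_Recv_init and MPI_Send_init, reactivated in each iteration with MPI_Startall, completed with MPI_Waitall, and finally released with MPI_Request_free.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, other, sendval, recvval, iter;
    MPI_Request reqs[2];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    other = 1 - rank;                /* partner rank; assumes two processes */
    MPI_Recv_init(&recvval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Send_init(&sendval, 1, MPI_INT, other, 0, MPI_COMM_WORLD, &reqs[1]);
    for (iter = 0; iter < 10; iter++) {
        sendval = rank * 100 + iter; /* refresh the send buffer each round */
        MPI_Startall(2, reqs);       /* reactivate both persistent requests */
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }
    MPI_Request_free(&reqs[0]);      /* persistent requests need explicit freeing */
    MPI_Request_free(&reqs[1]);
    MPI_Finalize();
    return 0;
}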



MPI_Test (C) MPI_TEST (Fortran)

MPI_Test determines whether a send or receive is complete.

#include <mpi.h>
int MPI_Test(MPI_Request *request, int *flag, MPI_Status *status)
use mpi
CALL MPI_TEST(request, flag, status, ierr)
LOGICAL     :: flag
INTEGER     :: request, status(MPI_STATUS_SIZE), ierr
request INOUT Communication request (handle).
flag OUT True if operation is complete, otherwise false (logical).
status OUT Status object (status).
ierr OUT Return code (Fortran only).
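
EXAMPLE

    A sketch of overlapping local work with a pending receive: rank 1 polls the request with MPI_Test and performs other computation (a placeholder function here) until the flag becomes true. Assumes at least two processes.

#include <mpi.h>
#include <stdio.h>

static void do_useful_work(void)
{
    /* placeholder for local computation overlapped with communication */
}

int main(int argc, char **argv)
{
    int rank, value = 5, flag = 0;
    MPI_Request req;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Irecv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        while (!flag) {
            do_useful_work();
            MPI_Test(&req, &flag, MPI_STATUS_IGNORE);
        }
        printf("receive completed: %d\n", value);
    }
    MPI_Finalize();
    return 0;
}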



MPI_Test_cancelled (C) MPI_TEST_CANCELLED (Fortran)

MPI_Test_cancelled determines whether a request is cancelled.

#include <mpi.h>
int MPI_Test_cancelled(MPI_Status *status, int *flag)
use mpi
CALL MPI_TEST_CANCELLED(status, flag, ierr)
LOGICAL     :: flag
INTEGER     :: status(MPI_STATUS_SIZE), ierr
status IN Status object (status).
flag OUT True if the communication was cancelled, otherwise false (logical).
ierr OUT Return code (Fortran only).



MPI_Testall (C) MPI_TESTALL (Fortran)

MPI_Testall determines whether all previously initiated communications are complete.

#include <mpi.h>
int MPI_Testall(int count, MPI_Request *array_of_requests,
                int *flag, MPI_Status *array_of_statuses)
use mpi
CALL MPI_TESTALL(count, array_of_requests, flag,
                 array_of_statuses, ierr)
LOGICAL     :: flag
INTEGER     :: count, array_of_requests(*), array_of_statuses(MPI_STATUS_SIZE, *), ierr
count IN List length (integer).
array_of_requests INOUT Array of requests (array of handles).
flag OUT True if all operations are complete, otherwise false (logical).
array_of_statuses OUT Array of status objects (array of status).
ierr OUT Return code (Fortran only).



MPI_Testany (C) MPI_TESTANY (Fortran)

MPI_Testany determines whether any previously initiated communication is complete.

#include <mpi.h>
int MPI_Testany(int count, MPI_Request *array_of_requests,
                int *index, int *flag, MPI_Status *status)
use mpi
CALL MPI_TESTANY(count, array_of_requests, index, flag, status, ierr)
LOGICAL     :: flag
INTEGER     :: count, array_of_requests(*), index, status(MPI_STATUS_SIZE), ierr
count IN List length (integer).
array_of_requests INOUT Array of requests (array of handles).
index OUT Index of completed operation or MPI_UNDEFINED if none are complete (integer).
flag OUT True if an operation is complete, otherwise false (logical).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_Testsome (C) MPI_TESTSOME (Fortran)

MPI_Testsome determines which of the previously initiated communications have completed.

#include <mpi.h>
int MPI_Testsome(int incount, MPI_Request *array_of_requests,
                 int *outcount, int *array_of_indices,
                 MPI_Status *array_of_statuses)
use mpi
CALL MPI_TESTSOME(incount, array_of_requests, outcount,
                  array_of_indices, array_of_statuses, ierr)
INTEGER     :: incount, array_of_requests(*), outcount,
               array_of_indices(*), array_of_statuses(MPI_STATUS_SIZE, *), ierr
incount IN List length (integer).
array_of_requests INOUT Array of requests (array of handles).
outcount OUT Number of completed requests (integer).
array_of_indices OUT Array of indices of complete operations (array of integers).
array_of_statuses OUT Array of status objects of complete operations (array of status).
ierr OUT Return code (Fortran only).



MPI_Wait (C) MPI_WAIT (Fortran)

MPI_Wait waits for an MPI send or receive to complete.

#include <mpi.h>
int MPI_Wait(MPI_Request *request, MPI_Status *status)
use mpi
CALL MPI_WAIT(request, status, ierr)
INTEGER     :: request, status(MPI_STATUS_SIZE), ierr
request INOUT Request (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_Waitall (C) MPI_WAITALL (Fortran)

MPI_Waitall waits for all of the specified communications to complete.

#include <mpi.h>
int MPI_Waitall(int count, MPI_Request *array_of_requests,
                MPI_Status *array_of_statuses)
use mpi
CALL MPI_WAITALL(count, array_of_requests, array_of_statuses,
                 ierr)
INTEGER     :: count, array_of_requests(*), array_of_statuses(MPI_STATUS_SIZE, *), ierr
count IN List length (integer).
array_of_requests INOUT Array of requests (array of handles).
array_of_statuses OUT Array of status objects (array of status).
ierr OUT Return code (Fortran only).



MPI_Waitany (C) MPI_WAITANY (Fortran)

MPI_Waitany waits for any one of the specified communications to complete.

#include <mpi.h>
int MPI_Waitany(int count, MPI_Request *array_of_requests,
                int *index, MPI_Status *status)
use mpi
CALL MPI_WAITANY(count, array_of_requests, index, status, ierr)
INTEGER     :: count, array_of_requests(*), index, status(MPI_STATUS_SIZE), ierr
count IN List length (integer).
array_of_requests INOUT Array of requests (array of handles).
index OUT Index of handle of completed operation (integer).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_Waitsome (C) MPI_WAITSOME (Fortran)

MPI_Waitsome waits for at least one of the specified communications to complete.

#include <mpi.h>
int MPI_Waitsome(int incount, MPI_Request *array_of_requests,
                 int *outcount, int *array_of_indices,
                 MPI_Status *array_of_statuses)
use mpi
CALL MPI_WAITSOME(incount, array_of_requests, outcount,
                  array_of_indices, array_of_statuses, ierr)
INTEGER     :: incount, array_of_requests(*), outcount,
               array_of_indices(*), array_of_statuses(MPI_STATUS_SIZE, *), ierr
incount IN Length of array_of_requests (integer).
array_of_requests INOUT Array of requests (array of handles).
outcount OUT Number of completed requests (integer).
array_of_indices OUT Array of indices of completed operations (array of integers).
array_of_statuses OUT Array of status objects of completed operations (array of status).
ierr OUT Return code (Fortran only).


4.2   Datatypes

MPI_Aint_add/MPI_AINT_ADD Returns the sum of the base and disp arguments
MPI_Aint_diff/MPI_AINT_DIFF Returns the difference between addr1 and addr2 arguments
MPI_Get_address/MPI_GET_ADDRESS Returns the address of a location in memory
MPI_Get_elements/MPI_GET_ELEMENTS Obtains the number of basic elements from received message information
MPI_Get_elements_x/MPI_GET_ELEMENTS_X Obtains the number of basic elements from received message information
MPI_Pack/MPI_PACK Packs a given set of data
MPI_Pack_external/MPI_PACK_EXTERNAL Packs data into a buffer in the "external32" data format
MPI_Pack_external_size/MPI_PACK_EXTERNAL_SIZE Calculates the size needed to pack data into a buffer in the "external32" data format
MPI_Pack_size/MPI_PACK_SIZE Determines the packing size
MPI_Type_commit/MPI_TYPE_COMMIT Registers a data type
MPI_Type_contiguous/MPI_TYPE_CONTIGUOUS Generates a new data type by repeating a given data type continuously
MPI_Type_create_darray/MPI_TYPE_CREATE_DARRAY Creates a distributed array datatype
MPI_Type_create_hindexed/MPI_TYPE_CREATE_HINDEXED Creates an indexed datatype with offsets in bytes
MPI_Type_create_hindexed_block/MPI_TYPE_CREATE_HINDEXED_BLOCK Creates an indexed datatype with constant sized blocks with offsets in bytes
MPI_Type_create_hvector/MPI_TYPE_CREATE_HVECTOR Creates a vector (strided) datatype with a stride in bytes
MPI_Type_create_indexed_block/MPI_TYPE_CREATE_INDEXED_BLOCK Creates an indexed datatype with constant sized blocks
MPI_Type_create_resized/MPI_TYPE_CREATE_RESIZED Creates a new datatype with a new lower bound and extent from an existing datatype
MPI_Type_create_struct/MPI_TYPE_CREATE_STRUCT Creates a struct datatype
MPI_Type_create_subarray/MPI_TYPE_CREATE_SUBARRAY Creates a subarray datatype
MPI_Type_dup/MPI_TYPE_DUP Duplicates an existing datatype
MPI_Type_free/MPI_TYPE_FREE Frees a data type
MPI_Type_get_contents/MPI_TYPE_GET_CONTENTS Returns the actual arguments used in a call to create a datatype
MPI_Type_get_envelope/MPI_TYPE_GET_ENVELOPE Returns the number and type of input arguments used in a call to create a datatype
MPI_Type_get_extent/MPI_TYPE_GET_EXTENT Returns the lower bound and extent of a datatype
MPI_Type_get_extent_x/MPI_TYPE_GET_EXTENT_X Returns the lower bound and extent of a datatype
MPI_Type_get_true_extent/MPI_TYPE_GET_TRUE_EXTENT Returns the true extent of a datatype
MPI_Type_get_true_extent_x/MPI_TYPE_GET_TRUE_EXTENT_X Returns the true extent of a datatype
MPI_Type_indexed/MPI_TYPE_INDEXED Generates a new data type by specifying the number of elements in each block and the starting positions of the blocks (in multiples of the old data type extent)
MPI_Type_size/MPI_TYPE_SIZE Returns the size of a data type
MPI_Type_size_x/MPI_TYPE_SIZE_X Returns the size of a datatype
MPI_Type_vector/MPI_TYPE_VECTOR Generates a new data type by specifying the number of block elements and the interval (in elements) between blocks
MPI_Unpack/MPI_UNPACK Unpacks a given set of data
MPI_Unpack_external/MPI_UNPACK_EXTERNAL Unpacks data from a buffer in the "external32" data format



MPI_Aint_add (C) MPI_AINT_ADD (Fortran)

MPI_Aint_add returns the sum of the base and disp arguments.

#include <mpi.h>
MPI_Aint MPI_Aint_add(MPI_Aint base, MPI_Aint disp)
use mpi
INTEGER(KIND=MPI_ADDRESS_KIND) MPI_AINT_ADD(base, disp)
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: base, disp
base IN A base address returned by a call to MPI_GET_ADDRESS (integer).
disp IN A signed integer displacement (integer).



MPI_Aint_diff (C) MPI_AINT_DIFF (Fortran)

MPI_Aint_diff returns the difference between the addr1 and addr2 arguments.

#include <mpi.h>
MPI_Aint MPI_Aint_diff(MPI_Aint addr1, MPI_Aint addr2)
use mpi
INTEGER(KIND=MPI_ADDRESS_KIND) MPI_AINT_DIFF(addr1, addr2)
INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN) :: addr1, addr2
addr1 IN An address returned by a call to MPI_GET_ADDRESS (integer).
addr2 IN An address returned by a call to MPI_GET_ADDRESS (integer).



MPI_Get_address (C) MPI_GET_ADDRESS (Fortran)

MPI_Get_address returns the address of a location in memory.

#include <mpi.h>
int MPI_Get_address(void *location, MPI_Aint *address)
use mpi
CALL MPI_GET_ADDRESS(location, address, ierr)
<arbitrary>                    :: location(*)
INTEGER                        :: ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: address
location IN Location in caller memory (choice).
address OUT Address of location (integer).
ierr OUT Return code (Fortran only).
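
EXAMPLE

    A sketch showing why MPI_Get_address and MPI_Aint_diff are used instead of raw pointer arithmetic: the byte displacement of a structure member is computed portably from two absolute addresses. The structure is illustrative, not from the manual.

#include <mpi.h>
#include <stdio.h>

struct particle {
    double pos;
    int    id;
};

int main(int argc, char **argv)
{
    struct particle p;
    MPI_Aint base, member, disp;
    MPI_Init(&argc, &argv);
    MPI_Get_address(&p, &base);
    MPI_Get_address(&p.id, &member);
    disp = MPI_Aint_diff(member, base);   /* portable byte displacement */
    printf("displacement of id: %ld bytes\n", (long)disp);
    MPI_Finalize();
    return 0;
}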



MPI_Get_elements (C) MPI_GET_ELEMENTS (Fortran)

MPI_Get_elements returns the number of basic elements in a datatype.

#include <mpi.h>
int MPI_Get_elements(MPI_Status *status, MPI_Datatype datatype,
                     int *elements)
use mpi
CALL MPI_GET_ELEMENTS(status, datatype, elements, ierr)
INTEGER     :: status(MPI_STATUS_SIZE), datatype, elements, ierr
status IN Return status of receive operation (status).
datatype IN Datatype used by receive operation (handle).
elements OUT Number of received basic elements (integer).
ierr OUT Return code (Fortran only).





MPI_Get_elements_x (C) MPI_GET_ELEMENTS_X (Fortran)

MPI_Get_elements_x returns the number of basic elements in a datatype.

#include <mpi.h>
int MPI_Get_elements_x(MPI_Status *status, MPI_Datatype datatype,
                       MPI_Count *elements)
use mpi
CALL MPI_GET_ELEMENTS_X(status, datatype, elements, ierr)
INTEGER                       :: status(MPI_STATUS_SIZE), datatype, ierr
INTEGER (KIND=MPI_COUNT_KIND) :: elements
status IN Return status of receive operation (status).
datatype IN Datatype used by receive operation (handle).
elements OUT Number of received basic elements (integer).
ierr OUT Return code (Fortran only).





MPI_Pack (C) MPI_PACK (Fortran)

MPI_Pack packs data into a buffer.

#include <mpi.h>
int MPI_Pack(void *inbuf, int incount, MPI_Datatype datatype,
             void *outbuf, int outcount, int *position,
             MPI_Comm comm)
use mpi
CALL MPI_PACK(inbuf, incount, datatype, outbuf, outcount,
              position, comm, ierr)
<arbitrary> :: inbuf(*), outbuf(*)
INTEGER     :: incount, datatype, outcount, position, comm, ierr
inbuf IN Initial address of input buffer (choice).
incount IN Number of input data items (integer).
datatype IN Datatype of each input data item (handle).
outbuf OUT Initial address of output buffer (choice).
outcount IN Output buffer size in bytes (integer).
position INOUT Current position in buffer in bytes (integer).
comm IN Communicator for packed message (handle).
ierr OUT Return code (Fortran only).
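
EXAMPLE

    A sketch of a pack/unpack round trip, assuming at least two processes: an int and a double are packed into one byte buffer, sent as MPI_PACKED, and unpacked in the same order on the receiving side.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, i = 3, position;
    double d = 2.5;
    char buffer[64];   /* assumed large enough for one int and one double */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        position = 0;
        MPI_Pack(&i, 1, MPI_INT, buffer, (int)sizeof(buffer), &position,
                 MPI_COMM_WORLD);
        MPI_Pack(&d, 1, MPI_DOUBLE, buffer, (int)sizeof(buffer), &position,
                 MPI_COMM_WORLD);
        MPI_Send(buffer, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buffer, (int)sizeof(buffer), MPI_PACKED, 0, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        position = 0;
        MPI_Unpack(buffer, (int)sizeof(buffer), &position, &i, 1, MPI_INT,
                   MPI_COMM_WORLD);
        MPI_Unpack(buffer, (int)sizeof(buffer), &position, &d, 1, MPI_DOUBLE,
                   MPI_COMM_WORLD);
        printf("unpacked %d and %f\n", i, d);
    }
    MPI_Finalize();
    return 0;
}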



MPI_Pack_external (C) MPI_PACK_EXTERNAL (Fortran)

MPI_Pack_external packs data into a buffer in the "external32" data format.

#include <mpi.h>
int MPI_Pack_external(char *datarep, void *inbuf, int incount,
                      MPI_Datatype datatype, void *outbuf,
                      MPI_Aint outsize, MPI_Aint *position)
use mpi
CALL MPI_PACK_EXTERNAL(datarep, inbuf, incount, datatype, outbuf,
                       outsize, position, ierr)
CHARACTER*(*)                  :: datarep
<arbitrary>                    :: inbuf(*), outbuf(*)
INTEGER                        :: incount, datatype, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: outsize, position
datarep IN Data representation (string).
inbuf IN Initial address of input buffer (choice).
incount IN Number of input data items (integer).
datatype IN Datatype of each input data item (handle).
outbuf OUT Initial address of output buffer (choice).
outsize IN Output buffer size in bytes (integer).
position INOUT Current position in buffer in bytes (integer).
ierr OUT Return code (Fortran only).



MPI_Pack_external_size (C) MPI_PACK_EXTERNAL_SIZE (Fortran)

MPI_Pack_external_size calculates the size needed to pack data into a buffer in the "external32" data format.

#include <mpi.h>
int MPI_Pack_external_size(char *datarep, int incount,
                           MPI_Datatype datatype, MPI_Aint *size)
use mpi
CALL MPI_PACK_EXTERNAL_SIZE(datarep, incount, datatype, size,
                            ierr)
CHARACTER*(*)                  :: datarep
INTEGER                        :: incount, datatype, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: size
datarep IN Data representation (string).
incount IN Number of input data items (integer).
datatype IN Datatype of each input data item (handle).
size OUT Upper bound of output buffer size in bytes (integer).
ierr OUT Return code (Fortran only).



MPI_Pack_size (C) MPI_PACK_SIZE (Fortran)

MPI_Pack_size returns the upper bound of the amount of space needed to pack a message.

#include <mpi.h>
int MPI_Pack_size(int incount, MPI_Datatype datatype,
                  MPI_Comm comm, int *size)
use mpi
CALL MPI_PACK_SIZE(incount, datatype, comm, size, ierr)
INTEGER     :: incount, datatype, comm, size, ierr
incount IN Number of input data items (integer).
datatype IN Datatype of each input data item (handle).
comm IN Communicator (handle).
size OUT Upper bound of size of packed message in bytes (integer).
ierr OUT Return code (Fortran only).



MPI_Type_commit (C) MPI_TYPE_COMMIT (Fortran)

MPI_Type_commit commits a datatype.

#include <mpi.h>
int MPI_Type_commit(MPI_Datatype *datatype)
use mpi
CALL MPI_TYPE_COMMIT(datatype, ierr)
INTEGER     :: datatype, ierr
datatype INOUT Datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_contiguous (C) MPI_TYPE_CONTIGUOUS (Fortran)

MPI_Type_contiguous creates a contiguous datatype.

#include <mpi.h>
int MPI_Type_contiguous(int count, MPI_Datatype oldtype,
                        MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_CONTIGUOUS(count, oldtype, newtype, ierr)
INTEGER     :: count, oldtype, newtype, ierr
count IN Replication count (nonnegative integer).
oldtype IN Old datatype (handle).
newtype OUT New datatype (handle).
ierr OUT Return code (Fortran only).
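
EXAMPLE

    A minimal sketch of the derived-datatype life cycle: construct with MPI_Type_contiguous, register with MPI_Type_commit before use in communication, and release with MPI_Type_free. Assumes at least two processes.

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    double triple[3] = {1.0, 2.0, 3.0};
    MPI_Datatype tripletype;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Type_contiguous(3, MPI_DOUBLE, &tripletype);
    MPI_Type_commit(&tripletype);   /* required before the type is used */
    if (rank == 0)
        MPI_Send(triple, 1, tripletype, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(triple, 1, tripletype, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    MPI_Type_free(&tripletype);
    MPI_Finalize();
    return 0;
}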



MPI_Type_create_darray (C) MPI_TYPE_CREATE_DARRAY (Fortran)

MPI_Type_create_darray creates a distributed array datatype.

#include <mpi.h>
int MPI_Type_create_darray(int size, int rank, int ndims,
                           int *array_of_gsizes, int *array_of_distribs,
                           int *array_of_dargs, int *array_of_psizes,
                           int order, MPI_Datatype oldtype,
                           MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_CREATE_DARRAY(size, rank, ndims, array_of_gsizes,
                            array_of_distribs, array_of_dargs, 
                            array_of_psizes, order, oldtype, 
                            newtype, ierr)
INTEGER    :: size, rank, ndims, array_of_gsizes(*),
              array_of_distribs(*), array_of_dargs(*), array_of_psizes(*),
              order, oldtype, newtype, ierr
size IN Size of process group (positive integer).
rank IN Rank in process group (nonnegative integer).
ndims IN Number of array dimensions and number of process grid dimensions (positive integer).
array_of_gsizes IN Number of elements of datatype oldtype in each dimension of global array (array of positive integers).
array_of_distribs IN Distribution of array in each dimension (array of state).
array_of_dargs IN Distribution argument in each dimension (array of positive integers).
array_of_psizes IN Size of process grid in each dimension (array of positive integers).
order IN Array storage order flag (state).
oldtype IN Old datatype (handle).
newtype OUT New datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_create_hindexed (C) MPI_TYPE_CREATE_HINDEXED (Fortran)

MPI_Type_create_hindexed creates an indexed datatype with offsets in bytes.

#include <mpi.h>
int MPI_Type_create_hindexed(int count,
                             int *array_of_blocklengths,
                             MPI_Aint *array_of_displacements,
                             MPI_Datatype oldtype, 
                             MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_CREATE_HINDEXED(count, array_of_blocklengths,
                              array_of_displacements, oldtype,
                              newtype, ierr)
INTEGER                        :: count, array_of_blocklengths(*),
                                  oldtype, newtype, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: array_of_displacements(*)
count IN Number of blocks and number of entries in array_of_blocklengths and array_of_displacements (integer).
array_of_blocklengths IN Number of elements in each block (array of nonnegative integers).
array_of_displacements IN Byte displacement of each block (array of integers).
oldtype IN Old datatype (handle).
newtype OUT New datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_create_hindexed_block (C) MPI_TYPE_CREATE_HINDEXED_BLOCK (Fortran)

MPI_Type_create_hindexed_block creates an indexed datatype with constant sized blocks with offsets in bytes.

#include <mpi.h>
int MPI_Type_create_hindexed_block(int count, int blocklength, const
                                   MPI_Aint array_of_displacements[],
                                   MPI_Datatype oldtype, MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_CREATE_HINDEXED_BLOCK(count, blocklength,
                                    array_of_displacements,
                                    oldtype, newtype, ierr)
INTEGER                        :: count, blocklength, oldtype,
                                  newtype, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: array_of_displacements(*)
count IN Length of array of displacements (nonnegative integer).
blocklength IN Number of elements in each block (nonnegative integer).
array_of_displacements IN Byte displacement of each block (array of integer).
oldtype IN Old datatype (handle).
newtype OUT New datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_create_hvector (C) MPI_TYPE_CREATE_HVECTOR (Fortran)

MPI_Type_create_hvector creates a vector (strided) datatype with a stride in bytes.

#include <mpi.h>
int MPI_Type_create_hvector(int count, int blocklength,
                            MPI_Aint stride, MPI_Datatype oldtype,
                            MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_CREATE_HVECTOR(count, blocklength, stride, oldtype,
                             newtype, ierr)
INTEGER                        :: count, blocklength, oldtype,
                                  newtype, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: stride
count IN Number of blocks (nonnegative integer).
blocklength IN Number of elements in each block (nonnegative integer).
stride IN Number of bytes between start of each block (integer).
oldtype IN Old datatype (handle).
newtype OUT New datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_create_indexed_block (C) MPI_TYPE_CREATE_INDEXED_BLOCK (Fortran)

MPI_Type_create_indexed_block creates an indexed datatype with constant sized blocks.

#include <mpi.h>
int MPI_Type_create_indexed_block(int count, int blocklength,
                                  int *array_of_displacements,
                                  MPI_Datatype oldtype,
                                  MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_CREATE_INDEXED_BLOCK(count, blocklength,
                                   array_of_displacements,
                                   oldtype, newtype, ierr)
INTEGER     :: count, blocklength, array_of_displacements(*),
               oldtype, newtype, ierr
count IN Number of blocks and number of entries in array_of_displacements (nonnegative integer).
blocklength IN Number of elements in each block (nonnegative integer).
array_of_displacements IN Displacement of each block in multiples of oldtype (array of integers).
oldtype IN Old datatype (handle).
newtype OUT New datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_create_resized (C) MPI_TYPE_CREATE_RESIZED (Fortran)

MPI_Type_create_resized creates a new datatype by resizing the lower and upper bounds of an existing datatype.

#include <mpi.h>
int MPI_Type_create_resized(MPI_Datatype oldtype, MPI_Aint lb,
                            MPI_Aint extent, MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_CREATE_RESIZED(oldtype, lb, extent, newtype, ierr)
INTEGER                        :: oldtype, newtype, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: lb, extent
oldtype IN Old datatype (handle).
lb IN New lower bound of datatype (integer).
extent IN New extent of datatype (integer).
newtype OUT New datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_create_struct (C) MPI_TYPE_CREATE_STRUCT (Fortran)

MPI_Type_create_struct creates a struct datatype.

#include <mpi.h>
int MPI_Type_create_struct(int count, int *array_of_blocklengths,
                           MPI_Aint *array_of_displacements,
                           MPI_Datatype *array_of_types,
                           MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_CREATE_STRUCT(count, array_of_blocklengths,
                            array_of_displacements, array_of_types,
                            newtype, ierr)
INTEGER                        :: count, array_of_blocklengths(*),
                                  array_of_types(*), newtype, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: array_of_displacements(*)
count IN Number of blocks and number of entries in array_of_blocklengths, array_of_displacements, and array_of_types (integer).
array_of_blocklengths IN Number of elements in each block (array of integers).
array_of_displacements IN Byte displacement of each block (array of integers).
array_of_types IN Type of elements in each block (array of handles to datatype objects).
newtype OUT New datatype (handle).
ierr OUT Return code (Fortran only).
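
EXAMPLE

    A sketch of describing a C structure with MPI_Type_create_struct: the byte displacements of the members are derived with MPI_Get_address and MPI_Aint_diff. The structure itself is illustrative. Assumes at least two processes.

#include <mpi.h>

struct particle {
    double pos;
    int    id;
};

int main(int argc, char **argv)
{
    struct particle p = {1.5, 7};
    int blocklens[2] = {1, 1};
    MPI_Aint base, disps[2];
    MPI_Datatype types[2] = {MPI_DOUBLE, MPI_INT};
    MPI_Datatype ptype;
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_address(&p, &base);
    MPI_Get_address(&p.pos, &disps[0]);
    MPI_Get_address(&p.id, &disps[1]);
    disps[0] = MPI_Aint_diff(disps[0], base);  /* byte offsets within the struct */
    disps[1] = MPI_Aint_diff(disps[1], base);
    MPI_Type_create_struct(2, blocklens, disps, types, &ptype);
    MPI_Type_commit(&ptype);
    if (rank == 0)
        MPI_Send(&p, 1, ptype, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(&p, 1, ptype, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Type_free(&ptype);
    MPI_Finalize();
    return 0;
}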



MPI_Type_create_subarray (C) MPI_TYPE_CREATE_SUBARRAY (Fortran)

MPI_Type_create_subarray creates a subarray datatype.

#include <mpi.h>
int MPI_Type_create_subarray(int ndims, int *array_of_sizes,
                             int *array_of_subsizes, int *array_of_starts,
                             int order, MPI_Datatype oldtype,
                             MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_CREATE_SUBARRAY(ndims, array_of_sizes,
                              array_of_subsizes, array_of_starts, order,
                              oldtype, newtype, ierr)
INTEGER     :: ndims, array_of_sizes(*), array_of_subsizes(*),
               array_of_starts(*), order, oldtype, newtype, ierr
ndims IN Number of array dimensions (positive integer).
array_of_sizes IN Number of elements of datatype oldtype in each dimension of the full array (array of positive integers).
array_of_subsizes IN Number of elements of datatype oldtype in each dimension of the subarray (array of positive integers).
array_of_starts IN Starting coordinates of the subarray in each dimension (array of nonnegative integers).
order IN Array storage order flag (state).
oldtype IN Array element datatype (handle).
newtype OUT New datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_dup (C) MPI_TYPE_DUP (Fortran)

MPI_Type_dup duplicates an existing datatype.

#include <mpi.h>
int MPI_Type_dup(MPI_Datatype type, MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_DUP(type, newtype, ierr)
INTEGER     :: type, newtype, ierr
type IN Datatype (handle).
newtype OUT Duplicated datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_free (C) MPI_TYPE_FREE (Fortran)

MPI_Type_free frees a datatype.

#include <mpi.h>
int MPI_Type_free(MPI_Datatype *datatype)
use mpi
CALL MPI_TYPE_FREE(datatype, ierr)
INTEGER     :: datatype, ierr
datatype INOUT Datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_get_contents (C) MPI_TYPE_GET_CONTENTS (Fortran)

MPI_Type_get_contents returns the actual arguments used in a call to create a datatype.

#include <mpi.h>
int MPI_Type_get_contents(MPI_Datatype datatype,
                          int max_integers, int max_addresses, int max_datatypes,
                          int *array_of_integers, MPI_Aint *array_of_addresses,
                          MPI_Datatype *array_of_datatypes)
use mpi
CALL MPI_TYPE_GET_CONTENTS(datatype, max_integers,
                           max_addresses, max_datatypes, array_of_integers,
                           array_of_addresses, array_of_datatypes, ierr)
INTEGER                        :: datatype, max_integers, max_addresses,
                                  max_datatypes, array_of_integers(*),
                                  array_of_datatypes(*), ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: array_of_addresses(*)
datatype IN Datatype (handle).
max_integers IN Number of elements in array_of_integers (nonnegative integer).
max_addresses IN Number of elements in array_of_addresses (nonnegative integer).
max_datatypes IN Number of elements in array_of_datatypes (nonnegative integer).
array_of_integers OUT Integer arguments used in creating datatype (array of integers).
array_of_addresses OUT Address arguments used in creating datatype (array of integers).
array_of_datatypes OUT Datatype arguments used in creating datatype (array of handles).
ierr OUT Return code (Fortran only).



MPI_Type_get_envelope (C) MPI_TYPE_GET_ENVELOPE (Fortran)

MPI_Type_get_envelope returns the number and type of input arguments used in a call to create a datatype.

#include <mpi.h>
int MPI_Type_get_envelope(MPI_Datatype datatype,
                          int *num_integers, int *num_addresses,
                          int *num_datatypes, int *combiner)
use mpi
CALL MPI_TYPE_GET_ENVELOPE(datatype, num_integers,
                           num_addresses, num_datatypes, combiner, ierr)
INTEGER     :: datatype, num_integers, num_addresses,
               num_datatypes, combiner, ierr
datatype IN Datatype (handle).
num_integers OUT Number of input integers used in the call to construct combiner (nonnegative integer).
num_addresses OUT Number of input addresses used in the call to construct combiner (nonnegative integer).
num_datatypes OUT Number of input datatypes used in the call to construct combiner (nonnegative integer).
combiner OUT Combiner (state).
ierr OUT Return code (Fortran only).



MPI_Type_get_extent (C) MPI_TYPE_GET_EXTENT (Fortran)

MPI_Type_get_extent returns the lower bound and extent of a datatype.

#include <mpi.h>
int MPI_Type_get_extent(MPI_Datatype datatype, MPI_Aint *lb,
                        MPI_Aint *extent)
use mpi
CALL MPI_TYPE_GET_EXTENT(datatype, lb, extent, ierr)
INTEGER                        :: datatype, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: lb, extent
datatype IN Datatype (handle).
lb OUT Lower bound of datatype (integer).
extent OUT Extent of datatype (integer).
ierr OUT Return code (Fortran only).



MPI_Type_get_extent_x (C) MPI_TYPE_GET_EXTENT_X (Fortran)

MPI_Type_get_extent_x returns the lower bound and extent of a datatype.

#include <mpi.h>
int MPI_Type_get_extent_x(MPI_Datatype datatype, MPI_Aint *lb,
                          MPI_Count *extent)
use mpi
CALL MPI_TYPE_GET_EXTENT_X(datatype, lb, extent, ierr)
INTEGER                      :: datatype, ierr
INTEGER(KIND=MPI_COUNT_KIND) :: lb, extent
datatype IN Datatype (handle).
lb OUT Lower bound of datatype (integer).
extent OUT Extent of datatype (integer).
ierr OUT Return code (Fortran only).



MPI_Type_get_true_extent (C) MPI_TYPE_GET_TRUE_EXTENT (Fortran)

MPI_Type_get_true_extent returns the true extent of a datatype.

#include <mpi.h>
int MPI_Type_get_true_extent(MPI_Datatype datatype, MPI_Aint *lb,
                             MPI_Aint *extent)
use mpi
CALL MPI_TYPE_GET_TRUE_EXTENT(datatype, lb, extent, ierr)
INTEGER                      :: datatype, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: lb, extent
datatype IN Datatype (handle).
lb OUT True lower bound of datatype (integer).
extent OUT True extent of datatype (integer).
ierr OUT Return code (Fortran only).



MPI_Type_get_true_extent_x (C) MPI_TYPE_GET_TRUE_EXTENT_X (Fortran)

MPI_Type_get_true_extent_x returns the true extent of a datatype.

#include <mpi.h>
int MPI_Type_get_true_extent_x(MPI_Datatype datatype, MPI_Count *true_lb, 
                               MPI_Count *true_extent)
use mpi
CALL MPI_TYPE_GET_TRUE_EXTENT_X(datatype, true_lb, true_extent,
                                ierr)
INTEGER                      :: datatype, ierr
INTEGER(KIND=MPI_COUNT_KIND) :: true_lb, true_extent
datatype IN Datatype (handle).
true_lb OUT True lower bound of datatype (integer).
true_extent OUT True extent of datatype (integer).
ierr OUT Return code (Fortran only).



MPI_Type_indexed (C) MPI_TYPE_INDEXED (Fortran)

MPI_Type_indexed creates an indexed datatype.

#include <mpi.h>
int MPI_Type_indexed(int count, int *blocklens,
                     int *indices, MPI_Datatype oldtype,
                     MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_INDEXED(count, blocklens, indices, oldtype,
                      newtype, ierr)
INTEGER     :: count, blocklens(*), indices(*), oldtype, newtype, ierr
count IN Number of blocks and number of entries in indices and blocklens (nonnegative integer).
blocklens IN Number of elements in each block (array of nonnegative integers).
indices IN Displacement of each block, in multiples of the oldtype extent (array of integers).
oldtype IN Old datatype (handle).
newtype OUT New datatype (handle).
ierr OUT Return code (Fortran only).
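
A minimal usage sketch in C (illustrative only; run with at least two processes, and note that the block lengths and indices are arbitrary example values):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, data[16];
    int blocklens[2] = {2, 3};   /* block sizes, in elements of oldtype */
    int indices[2]   = {0, 8};   /* block starts, in multiples of oldtype */
    MPI_Datatype idx;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int i = 0; i < 16; i++) data[i] = rank * 100 + i;

    /* One element of idx covers data[0..1] and data[8..10] */
    MPI_Type_indexed(2, blocklens, indices, MPI_INT, &idx);
    MPI_Type_commit(&idx);
    if (rank == 0)
        MPI_Send(data, 1, idx, 1, 0, MPI_COMM_WORLD);    /* 5 ints total */
    else if (rank == 1)
        MPI_Recv(data, 1, idx, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Type_free(&idx);
    MPI_Finalize();
    return 0;
}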



MPI_Type_size (C) MPI_TYPE_SIZE (Fortran)

MPI_Type_size returns the size of a datatype in bytes.

#include <mpi.h>
int MPI_Type_size(MPI_Datatype datatype, int *size)
use mpi
CALL MPI_TYPE_SIZE(datatype, size, ierr)
INTEGER     :: datatype, size, ierr
datatype IN Datatype (handle).
size OUT Size of datatype in bytes (integer).
ierr OUT Return code (Fortran only).



MPI_Type_size_x (C) MPI_TYPE_SIZE_X (Fortran)

MPI_Type_size_x returns the size of a datatype in bytes.

#include <mpi.h>
int MPI_Type_size_x(MPI_Datatype datatype, MPI_Count *size)
use mpi
CALL MPI_TYPE_SIZE_X(datatype, size, ierr)
INTEGER                        :: datatype, ierr
INTEGER(KIND=MPI_COUNT_KIND)   :: size
datatype IN Datatype (handle).
size OUT Size of datatype in bytes (integer).
ierr OUT Return code (Fortran only).



MPI_Type_vector (C) MPI_TYPE_VECTOR (Fortran)

MPI_Type_vector creates a vector (strided) datatype.

#include <mpi.h>
int MPI_Type_vector(int count, int blocklen,
                    int stride, MPI_Datatype oldtype,
                    MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_VECTOR(count, blocklen, stride, oldtype,
                     newtype, ierr)
INTEGER     :: count, blocklen, stride, oldtype, newtype, ierr
count IN Number of blocks (nonnegative integer).
blocklen IN Number of elements in each block (nonnegative integer).
stride IN Number of elements between start of each block (integer).
oldtype IN Old datatype (handle).
newtype OUT New datatype (handle).
ierr OUT Return code (Fortran only).
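
For illustration, the following C sketch (assumptions: at least two processes, a row-major 4x4 matrix) sends one matrix column with a vector datatype and receives it as contiguous elements:

#include <mpi.h>

#define N 4

int main(int argc, char **argv)
{
    int rank;
    double a[N][N] = {{0}}, col[N];
    MPI_Datatype column;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* N blocks of 1 double, N elements apart: one column of the matrix */
    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);
    if (rank == 0) {
        for (int i = 0; i < N; i++) a[i][2] = i + 1.0;
        MPI_Send(&a[0][2], 1, column, 1, 0, MPI_COMM_WORLD);  /* column 2 */
    } else if (rank == 1) {
        MPI_Recv(col, N, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }
    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}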



MPI_Unpack (C) MPI_UNPACK (Fortran)

MPI_Unpack unpacks data from a packed buffer into an output buffer.

#include <mpi.h>
int MPI_Unpack(void *inarea, int insize, int *position,
               void *outdata, int outcount, MPI_Datatype datatype,
               MPI_Comm comm)
use mpi
CALL MPI_UNPACK(inarea, insize, position, outdata, outcount,
                datatype, comm, ierr)
<arbitrary> :: inarea(*), outdata(*)
INTEGER     :: insize, position, outcount, datatype, comm, ierr
inarea IN Input buffer start (choice).
insize IN Size of input buffer, in bytes (integer).
position INOUT Current position in bytes (integer).
outdata OUT Output buffer start (choice).
outcount IN Number of items to be unpacked (integer).
datatype IN Datatype of each output data item (handle).
comm IN Communicator for packed message (handle).
ierr OUT Return code (Fortran only).
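
The following C sketch shows the usual pairing of MPI_Pack and MPI_Unpack with a shared position cursor (illustrative only; run with at least two processes; the 64-byte buffer is an example size):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, n, position;
    double x;
    char buf[64];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        n = 7; x = 3.14;
        position = 0;
        MPI_Pack(&n, 1, MPI_INT, buf, sizeof(buf), &position,
                 MPI_COMM_WORLD);
        MPI_Pack(&x, 1, MPI_DOUBLE, buf, sizeof(buf), &position,
                 MPI_COMM_WORLD);
        /* position now holds the packed size in bytes */
        MPI_Send(buf, position, MPI_PACKED, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, sizeof(buf), MPI_PACKED, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        position = 0;           /* cursor advances on each unpack call */
        MPI_Unpack(buf, sizeof(buf), &position, &n, 1, MPI_INT,
                   MPI_COMM_WORLD);
        MPI_Unpack(buf, sizeof(buf), &position, &x, 1, MPI_DOUBLE,
                   MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}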



MPI_Unpack_external (C) MPI_UNPACK_EXTERNAL (Fortran)

MPI_Unpack_external unpacks data from a buffer in the "external32" data format.

#include <mpi.h>
int MPI_Unpack_external(char *datarep, void *inbuf, MPI_Aint insize,
                        MPI_Aint *position, void *outbuf,
                        int outcount, MPI_Datatype datatype)
use mpi
CALL MPI_UNPACK_EXTERNAL(datarep, inbuf, insize, position,
                         outbuf, outcount, datatype, ierr)
CHARACTER*(*)                  :: datarep
<arbitrary>                    :: inbuf(*), outbuf(*)
INTEGER                        :: datatype, outcount, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: insize, position
datarep IN Data representation (string).
inbuf IN Initial address of input buffer (choice).
insize IN Input buffer size in bytes (integer).
position INOUT Current position in buffer in bytes (integer).
outbuf OUT Initial address of output buffer (choice).
outcount IN Number of output data items (integer).
datatype IN Datatype of each output data item (handle).
ierr OUT Return code (Fortran only).


4.3   Collective Communication

MPI_Allgather/MPI_ALLGATHER Gathers and scatters data
MPI_Allgatherv/MPI_ALLGATHERV Gathers and scatters data
MPI_Allreduce/MPI_ALLREDUCE Performs a reduction operation and distributes the result to all processes
MPI_Alltoall/MPI_ALLTOALL Gathers and scatters data
MPI_Alltoallv/MPI_ALLTOALLV Gathers and scatters data
MPI_Alltoallw/MPI_ALLTOALLW Gathers and scatters data
MPI_Barrier/MPI_BARRIER Performs barrier synchronization
MPI_Bcast/MPI_BCAST Performs broadcasting
MPI_Exscan/MPI_EXSCAN Performs an exclusive scan (partial reductions)
MPI_Gather/MPI_GATHER Gathers data
MPI_Gatherv/MPI_GATHERV Gathers data
MPI_Iallgather/MPI_IALLGATHER Gathers and scatters data (nonblocking version)
MPI_Iallgatherv/MPI_IALLGATHERV Gathers and scatters data (nonblocking version)
MPI_Iallreduce/MPI_IALLREDUCE Performs a reduction operation and distributes the result to all processes (nonblocking version)
MPI_Ialltoall/MPI_IALLTOALL Gathers and scatters data (nonblocking version)
MPI_Ialltoallv/MPI_IALLTOALLV Gathers and scatters data (nonblocking version)
MPI_Ialltoallw/MPI_IALLTOALLW Gathers and scatters data (nonblocking version)
MPI_Ibarrier/MPI_IBARRIER Performs barrier synchronization (nonblocking version)
MPI_Ibcast/MPI_IBCAST Performs broadcasting (nonblocking version)
MPI_Iexscan/MPI_IEXSCAN Performs an exclusive scan (partial reductions, nonblocking version)
MPI_Igather/MPI_IGATHER Gathers data (nonblocking version)
MPI_Igatherv/MPI_IGATHERV Gathers data (nonblocking version)
MPI_Ireduce/MPI_IREDUCE Performs a reduction operation (nonblocking version)
MPI_Ireduce_scatter/MPI_IREDUCE_SCATTER Performs a reduction operation and scatters the results to processes (nonblocking version)
MPI_Ireduce_scatter_block/MPI_IREDUCE_SCATTER_BLOCK Performs a reduction operation and scatters the results to processes (nonblocking version)
MPI_Iscan/MPI_ISCAN Performs a scan (partial reductions, nonblocking version)
MPI_Iscatter/MPI_ISCATTER Scatters data (nonblocking version)
MPI_Iscatterv/MPI_ISCATTERV Scatters data (nonblocking version)
MPI_Op_commutative/MPI_OP_COMMUTATIVE Queries the commutativity of a reduction operation
MPI_Op_create/MPI_OP_CREATE Assigns a user-defined function for the reduction operation
MPI_Op_free/MPI_OP_FREE Frees a user-defined function that was assigned for the reduction operation 
MPI_Reduce/MPI_REDUCE Performs a reduction operation
MPI_Reduce_local/MPI_REDUCE_LOCAL Performs a process-local reduction operation
MPI_Reduce_scatter/MPI_REDUCE_SCATTER Performs a reduction operation and scatters the results to processes
MPI_Reduce_scatter_block/MPI_REDUCE_SCATTER_BLOCK Performs a reduction operation and scatters the results to processes
MPI_Scan/MPI_SCAN Performs a scan (partial reductions)
MPI_Scatter/MPI_SCATTER Scatters data
MPI_Scatterv/MPI_SCATTERV Scatters data



MPI_Allgather (C) MPI_ALLGATHER (Fortran)

MPI_Allgather gathers data from all tasks and distributes it to all.

#include <mpi.h>
int MPI_Allgather(void *sendbuf, int sendcount,
                  MPI_Datatype sendtype, void *recvbuf,
                  int recvcount, MPI_Datatype recvtype,
                  MPI_Comm comm)
use mpi
CALL MPI_ALLGATHER(sendbuf, sendcount, sendtype, recvbuf,
                   recvcount, recvtype, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcount, recvtype, comm,
               ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice).
recvcount IN Number of elements received from any process (integer).
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).
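
A minimal C usage sketch (illustrative only): every process contributes one integer and every process receives the whole vector.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, *all;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    all = malloc(size * sizeof(int));
    /* Gather one int from every process; afterwards all[i] == i
       on every process */
    MPI_Allgather(&rank, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);
    free(all);
    MPI_Finalize();
    return 0;
}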



MPI_Allgatherv (C) MPI_ALLGATHERV (Fortran)

MPI_Allgatherv gathers data from all tasks and distributes it to all.

#include <mpi.h>
int MPI_Allgatherv(void *sendbuf, int sendcount,
                   MPI_Datatype sendtype, void *recvbuf,
                   int *recvcounts, int *displs,
                   MPI_Datatype recvtype, MPI_Comm comm)
use mpi
CALL MPI_ALLGATHERV(sendbuf, sendcount, sendtype, recvbuf,
                    recvcounts, displs, recvtype, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcounts(*), displs(*), recvtype,
               comm, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice).
recvcounts IN Integer array (of length group size) containing the number of elements that are received from each process.
displs IN Integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i.
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Allreduce (C) MPI_ALLREDUCE (Fortran)

MPI_Allreduce combines values from all processes and distributes the result to all.

#include <mpi.h>
int MPI_Allreduce(void *sendbuf, void *recvbuf, int count,
                  MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
use mpi
CALL MPI_ALLREDUCE(sendbuf, recvbuf, count, datatype, op, comm,
                   ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: count, datatype, op, comm, ierr
sendbuf IN Initial address of send buffer (choice).
recvbuf OUT Initial address of receive buffer (choice).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of elements of send buffer (handle).
op IN Operation (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).
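
For illustration, a minimal C sketch computing a global sum whose result is available on every process:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    /* Sum of all ranks, delivered to every process */
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    printf("rank %d: sum = %d (expected %d)\n",
           rank, sum, size * (size - 1) / 2);
    MPI_Finalize();
    return 0;
}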



MPI_Alltoall (C) MPI_ALLTOALL (Fortran)

MPI_Alltoall sends data from all processes to all.

#include <mpi.h>
int MPI_Alltoall(void *sendbuf, int sendcount,
                 MPI_Datatype sendtype, void *recvbuf, int recvcnt,
                 MPI_Datatype recvtype, MPI_Comm comm)
use mpi
CALL MPI_ALLTOALL(sendbuf, sendcount, sendtype, recvbuf, recvcnt,
                  recvtype, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcnt, recvtype, comm, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements to send to each process (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Initial address of receive buffer (choice).
recvcnt IN Number of elements received from any process (integer).
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Alltoallv (C) MPI_ALLTOALLV (Fortran)

MPI_Alltoallv sends data from all processes to all using a displacement.

#include <mpi.h>
int MPI_Alltoallv(void *sendbuf, int *sendcnts, int *sdispls,
                  MPI_Datatype sendtype, void *recvbuf,
                  int *recvcnts, int *rdispls, MPI_Datatype recvtype,
                  MPI_Comm comm)
use mpi
CALL MPI_ALLTOALLV(sendbuf, sendcnts, sdispls, sendtype, recvbuf,
                   recvcnts, rdispls, recvtype, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcnts(*), sdispls(*), sendtype, recvcnts(*), rdispls(*),
               recvtype, comm, ierr
sendbuf IN Initial address of send buffer (choice).
sendcnts IN Integer array equal to the group size specifying the number of elements to send to each processor.
sdispls IN Integer array (of length group size). Entry j specifies the displacement (relative to sendbuf) from which to take the outgoing data destined for process j.
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice).
recvcnts IN Integer array equal to the group size specifying the maximum number of elements that can be received from each processor.
rdispls IN Integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i.
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).
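
The count and displacement arrays are the error-prone part of this interface, so an illustrative C sketch follows (assumption for the example: rank r sends r+1 integers to every process, so each process receives i+1 integers from rank i):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int *sendcnts = malloc(size * sizeof(int));
    int *recvcnts = malloc(size * sizeof(int));
    int *sdispls  = malloc(size * sizeof(int));
    int *rdispls  = malloc(size * sizeof(int));
    for (int i = 0; i < size; i++) {
        sendcnts[i] = rank + 1;
        recvcnts[i] = i + 1;
        sdispls[i]  = i * (rank + 1);
        rdispls[i]  = i * (i + 1) / 2;     /* prefix sum of recvcnts */
    }
    int *sendbuf = malloc(size * (rank + 1) * sizeof(int));
    int *recvbuf = malloc(size * (size + 1) / 2 * sizeof(int));
    for (int i = 0; i < size * (rank + 1); i++) sendbuf[i] = rank;

    MPI_Alltoallv(sendbuf, sendcnts, sdispls, MPI_INT,
                  recvbuf, recvcnts, rdispls, MPI_INT, MPI_COMM_WORLD);

    free(sendbuf); free(recvbuf);
    free(sendcnts); free(recvcnts); free(sdispls); free(rdispls);
    MPI_Finalize();
    return 0;
}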



MPI_Alltoallw (C) MPI_ALLTOALLW (Fortran)

MPI_Alltoallw sends data from all processes to all using a displacement and datatype.

#include <mpi.h>
int MPI_Alltoallw(void *sendbuf, int *sendcounts, int *sdispls,
                  MPI_Datatype *sendtypes, void *recvbuf,
                  int *recvcounts, int *rdispls,
                  MPI_Datatype *recvtypes, MPI_Comm comm)
use mpi
CALL MPI_ALLTOALLW(sendbuf, sendcounts, sdispls, sendtypes,
                   recvbuf, recvcounts, rdispls, recvtypes, comm,
                   ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcounts(*), sdispls(*), sendtypes(*), recvcounts(*),
               rdispls(*), recvtypes(*), comm, ierr
sendbuf IN Initial address of send buffer (choice).
sendcounts IN Integer array equal to the group size specifying the number of elements to send to each processor (integer).
sdispls IN Integer array (of length group size). Entry j specifies the displacement (relative to sendbuf) from which to take the outgoing data destined for process j.
sendtypes IN Array of datatypes (of length group size). Entry j specifies the type of data to send to process j (handle).
recvbuf OUT Address of receive buffer (choice).
recvcounts IN Integer array equal to the group size specifying the maximum number of elements that can be received from each processor (integer).
rdispls IN Integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i.
recvtypes IN Array of datatypes (of length group size). Entry i specifies the type of data received from process i (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Barrier (C) MPI_BARRIER (Fortran)

MPI_Barrier blocks until all processes reach this function.

#include <mpi.h>
int MPI_Barrier(MPI_Comm comm)
use mpi
CALL MPI_BARRIER(comm, ierr)
INTEGER     :: comm, ierr
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Bcast (C) MPI_BCAST (Fortran)

MPI_Bcast broadcasts a message from the process with rank root to all the other processes of the group.

#include <mpi.h>
int MPI_Bcast(void *buffer, int count, MPI_Datatype datatype,
              int root, MPI_Comm comm)
use mpi
CALL MPI_BCAST(buffer, count, datatype, root, comm, ierr)
<arbitrary> :: buffer(*)
INTEGER     :: count, datatype, root, comm, ierr
buffer INOUT Initial address of buffer (choice).
count IN Number of entries in buffer (integer).
datatype IN Datatype of buffer (handle).
root IN Rank of broadcast root (integer).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).
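
A minimal C usage sketch (illustrative only; the parameter values are examples). Every process passes the same root; the buffer is the send buffer at the root and the receive buffer everywhere else:

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, params[3];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) { params[0] = 10; params[1] = 20; params[2] = 30; }
    /* After the call, all processes hold root's values */
    MPI_Bcast(params, 3, MPI_INT, 0, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}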



MPI_Exscan (C) MPI_EXSCAN (Fortran)

MPI_Exscan performs an exclusive scan operation (partial reductions).

#include <mpi.h>
int MPI_Exscan(void *sendbuf, void *recvbuf, int count,
               MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
use mpi
CALL MPI_EXSCAN(sendbuf, recvbuf, count, datatype, op,
                comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: count, datatype, op, comm, ierr
sendbuf IN Initial address of send buffer (choice).
recvbuf OUT Initial address of receive buffer (choice).
count IN Number of elements in input buffer (integer).
datatype IN Data type of elements in input buffer (handle).
op IN Operation (handle).
comm IN Intra-Communicator (handle).
ierr OUT Return code (Fortran only).
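
A common use of the exclusive scan is computing each process's offset into a global array or file; an illustrative C sketch follows (the local counts are example values):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, mycount, offset = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    mycount = rank + 1;                 /* e.g., locally owned items */
    /* offset on rank r becomes the sum of mycount over ranks 0..r-1;
       the result on rank 0 is undefined, so offset keeps its initial 0 */
    MPI_Exscan(&mycount, &offset, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}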



MPI_Gather (C) MPI_GATHER (Fortran)

MPI_Gather gathers values from a group of processes.

#include <mpi.h>
int MPI_Gather(void *sendbuf, int sendcount,
               MPI_Datatype sendtype, void *recvbuf,
               int recvcount, MPI_Datatype recvtype,
               int root, MPI_Comm comm)
use mpi
CALL MPI_GATHER(sendbuf, sendcount, sendtype, recvbuf,
                recvcount, recvtype, root, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcount, recvtype, root,
               comm, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice, significant only at root).
recvcount IN Number of elements received from any process (integer).
recvtype IN Datatype of receive buffer elements (handle, significant only at root).
root IN Rank of receiving process (integer).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Gatherv (C) MPI_GATHERV (Fortran)

MPI_Gatherv gathers data from all processes in a group into a specific location.

#include <mpi.h>
int MPI_Gatherv(void *sendbuf, int sendcount,
                MPI_Datatype sendtype, void *recvbuf,
                int *recvcounts, int *displs,
                MPI_Datatype recvtype, int root,
                MPI_Comm comm)
use mpi
CALL MPI_GATHERV(sendbuf, sendcount, sendtype, recvbuf,
                 recvcounts, displs, recvtype, root, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcounts(*), displs(*), recvtype,
               root, comm, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice, significant only at root).
recvcounts IN Integer array (of length remote-group size) containing the number of elements that are received from each process (significant only at root).
displs IN Integer array (of length remote-group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i (significant only at root).
recvtype IN Datatype of receive buffer elements (handle, significant only at root).
root IN Rank of receiving process (integer).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).
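
An illustrative C sketch with variable contributions (rank r sends r+1 integers; the counts, displacements, and receive buffer are needed only at the root):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int mycount = rank + 1;
    int *mydata = malloc(mycount * sizeof(int));
    for (int i = 0; i < mycount; i++) mydata[i] = rank;

    int *recvcounts = NULL, *displs = NULL, *recvbuf = NULL;
    if (rank == 0) {
        recvcounts = malloc(size * sizeof(int));
        displs     = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) {
            recvcounts[i] = i + 1;
            displs[i]     = i * (i + 1) / 2;  /* prefix sum of counts */
        }
        recvbuf = malloc(size * (size + 1) / 2 * sizeof(int));
    }
    MPI_Gatherv(mydata, mycount, MPI_INT,
                recvbuf, recvcounts, displs, MPI_INT, 0, MPI_COMM_WORLD);
    free(mydata); free(recvcounts); free(displs); free(recvbuf);
    MPI_Finalize();
    return 0;
}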



MPI_Iallgather (C) MPI_IALLGATHER (Fortran)

MPI_Iallgather gathers data from all tasks and distributes it to all (nonblocking version).

#include <mpi.h>
int MPI_Iallgather(void *sendbuf, int sendcount,
                   MPI_Datatype sendtype, void *recvbuf,
                   int recvcount, MPI_Datatype recvtype,
                   MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_IALLGATHER(sendbuf, sendcount, sendtype, recvbuf,
                    recvcount, recvtype, comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcount, recvtype, comm, request,
               ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice).
recvcount IN Number of elements received from any process (integer).
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Iallgatherv (C) MPI_IALLGATHERV (Fortran)

MPI_Iallgatherv gathers data from all tasks and distributes it to all (nonblocking version).

#include <mpi.h>
int MPI_Iallgatherv(void *sendbuf, int sendcount,
                    MPI_Datatype sendtype, void *recvbuf,
                    int *recvcounts, int *displs,
                    MPI_Datatype recvtype, MPI_Comm comm,
                    MPI_Request *request)
use mpi
CALL MPI_IALLGATHERV(sendbuf, sendcount, sendtype, recvbuf,
                     recvcounts, displs, recvtype, comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcounts(*), displs(*), recvtype,
               comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice).
recvcounts IN Integer array (of length group size) containing the number of elements that are received from each process.
displs IN Integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i.
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Iallreduce (C) MPI_IALLREDUCE (Fortran)

MPI_Iallreduce combines values from all processes and distributes the result to all (nonblocking version).

#include <mpi.h>
int MPI_Iallreduce(void *sendbuf, void *recvbuf, int count,
                   MPI_Datatype datatype, MPI_Op op, MPI_Comm comm,
                   MPI_Request *request)
use mpi
CALL MPI_IALLREDUCE(sendbuf, recvbuf, count, datatype, op, comm,
                    request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: count, datatype, op, comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
recvbuf OUT Initial address of receive buffer (choice).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of elements of send buffer (handle).
op IN Operation (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).
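
The point of the nonblocking variant is overlapping the reduction with independent computation; an illustrative C sketch follows (the overlapped work is a placeholder):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, sum;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Start the reduction, do unrelated work, then complete it.
       Neither buffer may be touched before MPI_Wait returns. */
    MPI_Iallreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD,
                   &req);
    /* ... computation independent of rank and sum ... */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    /* sum is now valid on all processes */
    MPI_Finalize();
    return 0;
}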



MPI_Ialltoall (C) MPI_IALLTOALL (Fortran)

MPI_Ialltoall sends data from all processes to all (nonblocking version).

#include <mpi.h>
int MPI_Ialltoall(void *sendbuf, int sendcount,
                  MPI_Datatype sendtype, void *recvbuf, int recvcnt,
                  MPI_Datatype recvtype, MPI_Comm comm,
                  MPI_Request *request)
use mpi
CALL MPI_IALLTOALL(sendbuf, sendcount, sendtype, recvbuf, recvcnt,
                   recvtype, comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcnt, recvtype, comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements to send to each process (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Initial address of receive buffer (choice).
recvcnt IN Number of elements received from any process (integer).
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Ialltoallv (C) MPI_IALLTOALLV (Fortran)

MPI_Ialltoallv sends data from all processes to all using a displacement (nonblocking version).

#include <mpi.h>
int MPI_Ialltoallv(void *sendbuf, int *sendcnts, int *sdispls,
                   MPI_Datatype sendtype, void *recvbuf,
                   int *recvcnts, int *rdispls, MPI_Datatype recvtype,
                   MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_IALLTOALLV(sendbuf, sendcnts, sdispls, sendtype, recvbuf,
                    recvcnts, rdispls, recvtype, comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcnts(*), sdispls(*), sendtype, recvcnts(*), rdispls(*),
               recvtype, comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
sendcnts IN Integer array equal to the group size specifying the number of elements to send to each processor.
sdispls IN Integer array (of length group size). Entry j specifies the displacement (relative to sendbuf) from which to take the outgoing data destined for process j.
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice).
recvcnts IN Integer array equal to the group size specifying the maximum number of elements that can be received from each processor.
rdispls IN Integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i.
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Ialltoallw (C) MPI_IALLTOALLW (Fortran)

MPI_Ialltoallw sends data from all processes to all using a displacement and datatype (nonblocking version).

#include <mpi.h>
int MPI_Ialltoallw(void *sendbuf, int *sendcounts, int *sdispls,
                   MPI_Datatype *sendtypes, void *recvbuf,
                   int *recvcounts, int *rdispls,
                   MPI_Datatype *recvtypes, MPI_Comm comm,
                   MPI_Request *request)
use mpi
CALL MPI_IALLTOALLW(sendbuf, sendcounts, sdispls, sendtypes,
                    recvbuf, recvcounts, rdispls, recvtypes, comm,
                    request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcounts(*), sdispls(*), sendtypes(*), recvcounts(*),
               rdispls(*), recvtypes(*), comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
sendcounts IN Integer array equal to the group size specifying the number of elements to send to each processor (integer).
sdispls IN Integer array (of length group size). Entry j specifies the displacement (relative to sendbuf) from which to take the outgoing data destined for process j.
sendtypes IN Array of datatypes (of length group size). Entry j specifies the type of data to send to process j (handle).
recvbuf OUT Address of receive buffer (choice).
recvcounts IN Integer array equal to the group size specifying the maximum number of elements that can be received from each processor (integer).
rdispls IN Integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i.
recvtypes IN Array of datatypes (of length group size). Entry i specifies the type of data received from process i (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Ibarrier (C) MPI_IBARRIER (Fortran)

MPI_Ibarrier performs a barrier synchronization (nonblocking version). It returns without blocking; the barrier is complete when the associated communication request completes.

#include <mpi.h>
int MPI_Ibarrier(MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_IBARRIER(comm, request, ierr)
INTEGER     :: comm, request, ierr
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).
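
A typical pattern, sketched in C for illustration: announce arrival at the barrier, then keep servicing local work until all processes have arrived:

#include <mpi.h>

int main(int argc, char **argv)
{
    int done = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Ibarrier(MPI_COMM_WORLD, &req);
    while (!done) {
        /* ... poll other sources of work here ... */
        MPI_Test(&req, &done, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}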



MPI_Ibcast (C) MPI_IBCAST (Fortran)

MPI_Ibcast broadcasts a message from the process with rank root to all the other processes of the group (nonblocking version).

#include <mpi.h>
int MPI_Ibcast(void *buffer, int count, MPI_Datatype datatype,
               int root, MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_IBCAST(buffer, count, datatype, root, comm,
                request, ierr)
<arbitrary> :: buffer(*)
INTEGER     :: count, datatype, root, comm, request, ierr
buffer INOUT Initial address of buffer (choice).
count IN Number of entries in buffer (integer).
datatype IN Datatype of buffer (handle).
root IN Rank of broadcast root (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Iexscan (C) MPI_IEXSCAN (Fortran)

MPI_Iexscan performs an exclusive scan operation (partial reductions, nonblocking version).

#include <mpi.h>
int MPI_Iexscan(void *sendbuf, void *recvbuf, int count,
                MPI_Datatype datatype, MPI_Op op, MPI_Comm comm,
                MPI_Request *request)
use mpi
CALL MPI_IEXSCAN(sendbuf, recvbuf, count, datatype, op,
                 comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: count, datatype, op, comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
recvbuf OUT Initial address of receive buffer (choice).
count IN Number of elements in input buffer (integer).
datatype IN Data type of elements in input buffer (handle).
op IN Operation (handle).
comm IN Intra-Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Igather (C) MPI_IGATHER (Fortran)

MPI_Igather gathers values from a group of processes (nonblocking version).

#include <mpi.h>
int MPI_Igather(void *sendbuf, int sendcount,
                MPI_Datatype sendtype, void *recvbuf,
                int recvcount, MPI_Datatype recvtype,
                int root, MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_IGATHER(sendbuf, sendcount, sendtype, recvbuf,
                 recvcount, recvtype, root, comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcount, recvtype, root,
               comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice, significant only at root).
recvcount IN Number of elements received from any process (integer).
recvtype IN Datatype of receive buffer elements (handle, significant only at root).
root IN Rank of receiving process (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Igatherv (C) MPI_IGATHERV (Fortran)

MPI_Igatherv gathers data from all processes in a group into a specific location (nonblocking version).

#include <mpi.h>
int MPI_Igatherv(void *sendbuf, int sendcount,
                 MPI_Datatype sendtype, void *recvbuf,
                 int *recvcounts, int *displs,
                 MPI_Datatype recvtype, int root,
                 MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_IGATHERV(sendbuf, sendcount, sendtype, recvbuf,
                  recvcounts, displs, recvtype, root, comm,
                  request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcounts(*), displs(*), recvtype,
               root, comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice, significant only at root).
recvcounts IN Integer array (of length remote-group size) containing the number of elements that are received from each process (significant only at root).
displs IN Integer array (of length remote-group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i (significant only at root).
recvtype IN Datatype of receive buffer elements (handle, significant only at root).
root IN Rank of receiving process (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Ireduce (C) MPI_IREDUCE (Fortran)

MPI_Ireduce reduces values on all processes to a single value (nonblocking version).

#include <mpi.h>
int MPI_Ireduce(void *sendbuf, void *recvbuf, int count,
                MPI_Datatype datatype, MPI_Op op, int root,
                MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_IREDUCE(sendbuf, recvbuf, count, datatype, op,
                 root, comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: count, datatype, op, root, comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
recvbuf OUT Initial address of receive buffer (choice, significant only at root).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of each send buffer element (handle).
op IN Reduce operation (handle).
root IN Rank of root process (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Ireduce_scatter (C) MPI_IREDUCE_SCATTER (Fortran)

MPI_Ireduce_scatter combines values and scatters the results (nonblocking version).

#include <mpi.h>
int MPI_Ireduce_scatter(void *sendbuf, void *recvbuf,
                        int *recvcounts, MPI_Datatype datatype,
                        MPI_Op op, MPI_Comm comm,
                        MPI_Request *request)
use mpi
CALL MPI_IREDUCE_SCATTER(sendbuf, recvbuf, recvcounts, datatype,
                         op, comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: recvcounts(*), datatype, op, comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
recvbuf OUT Initial address of receive buffer (choice).
recvcounts IN Number of elements distributed to each process. Must be identical on all calling processes (array of integers).
datatype IN Datatype of each send buffer element (handle).
op IN Operation (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Ireduce_scatter_block (C) MPI_IREDUCE_SCATTER_BLOCK (Fortran)

MPI_Ireduce_scatter_block combines values and scatters the results with equal blocks (nonblocking version).

#include <mpi.h>
int MPI_Ireduce_scatter_block(void *sendbuf, void *recvbuf,
                              int recvcount, MPI_Datatype datatype,
                              MPI_Op op, MPI_Comm comm,
                              MPI_Request *request)
use mpi
CALL MPI_IREDUCE_SCATTER_BLOCK(sendbuf, recvbuf, recvcount, datatype,
                               op, comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: recvcount, datatype, op, comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
recvbuf OUT Initial address of receive buffer (choice).
recvcount IN Element count per block (non-negative integer).
datatype IN Datatype of each send buffer element (handle).
op IN Operation (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Iscan (C) MPI_ISCAN (Fortran)

MPI_Iscan computes the scan (partial reductions) of data on a collection of processes (nonblocking version).

#include <mpi.h>
int MPI_Iscan(void *sendbuf, void *recvbuf, int count,
              MPI_Datatype datatype, MPI_Op op, MPI_Comm comm,
              MPI_Request *request)
use mpi
CALL MPI_ISCAN(sendbuf, recvbuf, count, datatype, op, comm,
               request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: count, datatype, op, comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
recvbuf OUT Initial address of receive buffer (choice).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of each send buffer element (handle).
op IN Operation (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Iscatter (C) MPI_ISCATTER (Fortran)

MPI_Iscatter scatters data from one task to all other tasks in a group (nonblocking version).

#include <mpi.h>
int MPI_Iscatter(void *sendbuf, int sendcnt,
                MPI_Datatype sendtype, void *recvbuf,
                int recvcnt, MPI_Datatype recvtype,
                int root, MPI_Comm comm,
                MPI_Request *request)
use mpi
CALL MPI_ISCATTER(sendbuf, sendcnt, sendtype, recvbuf, recvcnt,
                  recvtype, root, comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcnt, sendtype, recvcnt, recvtype, root, comm,
               request, ierr
sendbuf IN Initial address of send buffer (choice, significant only at root).
sendcnt IN Number of elements sent to each process (integer, significant only at root).
sendtype IN Datatype of each send buffer element (handle, significant only at root).
recvbuf OUT Initial address of receive buffer (choice).
recvcnt IN Number of elements in receive buffer (integer).
recvtype IN Datatype of each receive buffer element (handle).
root IN Rank of sending process (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Iscatterv (C) MPI_ISCATTERV (Fortran)

MPI_Iscatterv scatters a buffer in parts to all tasks in a group (nonblocking version).

#include <mpi.h>
int MPI_Iscatterv(void *sendbuf, int *sendcnts, int *displs,
                  MPI_Datatype sendtype, void *recvbuf,
                  int recvcnt, MPI_Datatype recvtype,
                  int root, MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_ISCATTERV(sendbuf, sendcnts, displs, sendtype, recvbuf,
                   recvcnt, recvtype, root, comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcnts(*), displs(*), sendtype, recvcnt, recvtype,
               root, comm, request, ierr
sendbuf IN Initial address of send buffer (choice, significant only at root).
sendcnts IN Integer array (of length group size) specifying the number of elements to send to each processor.
displs IN Integer array (of length group size). Entry i specifies the displacement relative to sendbuf from which to take the outgoing data to process i.
sendtype IN Datatype of each send buffer element (handle, significant only at root).
recvbuf OUT Initial address of receive buffer (choice).
recvcnt IN Number of elements in receive buffer (integer).
recvtype IN Datatype of each receive buffer element (handle).
root IN Rank of sending process (integer).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Op_commutative (C) MPI_OP_COMMUTATIVE (Fortran)

MPI_Op_commutative queries the commutativity of a reduction operation.

#include <mpi.h>
int MPI_Op_commutative(MPI_Op op, int *commute)
use mpi
CALL MPI_OP_COMMUTATIVE(op, commute, ierr)
LOGICAL     :: commute
INTEGER     :: op, ierr
op IN Operation (handle).
commute OUT True if commutative, otherwise false (logical).
ierr OUT Return code (Fortran only).



MPI_Op_create (C) MPI_OP_CREATE (Fortran)

MPI_Op_create creates a user-defined combination function handle.

#include <mpi.h>
int MPI_Op_create(MPI_User_function *function, int commute,
                  MPI_Op *op)

typedef void MPI_User_function(void *invec, void *inoutvec, int *len,
                               MPI_Datatype *datatype)
use mpi
CALL MPI_OP_CREATE(user_function, commute, op, ierr)
EXTERNAL    :: user_function
LOGICAL     :: commute
INTEGER     :: op, ierr

SUBROUTINE user_function(invec, inoutvec, len, type)
<arbitrary> :: invec(len), inoutvec(len)
INTEGER     :: len, type
function IN User-defined function (function).
commute IN True if commutative, otherwise false (logical).
op OUT Operation (handle).
ierr OUT Return code (Fortran only).
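
An illustrative C sketch defining a reduction that is not predefined, the element-wise maximum of absolute values (the operation and input values are example choices; a robust callback would also check the datatype):

#include <mpi.h>
#include <stdlib.h>

/* Element-wise max of absolute values; associative and commutative.
   Assumes MPI_INT data for brevity. */
static void absmax(void *invec, void *inoutvec, int *len,
                   MPI_Datatype *datatype)
{
    int *in = (int *)invec, *inout = (int *)inoutvec;
    for (int i = 0; i < *len; i++) {
        int a = abs(in[i]), b = abs(inout[i]);
        inout[i] = (a > b) ? a : b;
    }
}

int main(int argc, char **argv)
{
    int rank, val, result;
    MPI_Op op;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    val = (rank % 2) ? -rank : rank;

    MPI_Op_create(absmax, 1, &op);      /* 1 = commutative */
    MPI_Allreduce(&val, &result, 1, MPI_INT, op, MPI_COMM_WORLD);
    MPI_Op_free(&op);
    MPI_Finalize();
    return 0;
}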



MPI_Op_free (C) MPI_OP_FREE (Fortran)

MPI_Op_free frees a user-defined combination function handle.

#include <mpi.h>
int MPI_Op_free(MPI_Op *op)
use mpi
CALL MPI_OP_FREE(op, ierr)
INTEGER     :: op, ierr
op INOUT Operation (handle).
ierr OUT Return code (Fortran only).



MPI_Reduce (C) MPI_REDUCE (Fortran)

MPI_Reduce reduces values on all processes to a single value.

#include <mpi.h>
int MPI_Reduce(void *sendbuf, void *recvbuf, int count,
               MPI_Datatype datatype, MPI_Op op, int root,
               MPI_Comm comm)
use mpi
CALL MPI_REDUCE(sendbuf, recvbuf, count, datatype, op,
                root, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: count, datatype, op, root, comm, ierr
sendbuf IN Initial address of send buffer (choice).
recvbuf OUT Initial address of receive buffer (choice, significant only at root).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of each send buffer element (handle).
op IN Reduce operation (handle).
root IN Rank of root process (integer).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Reduce_local (C) MPI_REDUCE_LOCAL (Fortran)

MPI_Reduce_local reduces values on the local process to a single value.

#include <mpi.h>
int MPI_Reduce_local(void *inbuf, void *inoutbuf, int count,
                     MPI_Datatype datatype, MPI_Op op)
use mpi
CALL MPI_REDUCE_LOCAL(inbuf, inoutbuf, count, datatype, op, ierr)
<arbitrary> :: inbuf(*), inoutbuf(*)
INTEGER     :: count, datatype, op, ierr
inbuf IN Input buffer (choice).
inoutbuf INOUT Combined input and output buffer (choice).
count IN Number of elements in inbuf and inoutbuf buffers (non-negative integer).
datatype IN Datatype of elements of inbuf and inoutbuf buffers (handle).
op IN Operation (handle).
ierr OUT Return code (Fortran only).



MPI_Reduce_scatter (C) MPI_REDUCE_SCATTER (Fortran)

MPI_Reduce_scatter combines values and scatters the results.

#include <mpi.h>
int MPI_Reduce_scatter(void *sendbuf, void *recvbuf,
                       int *recvcounts, MPI_Datatype datatype,
                       MPI_Op op, MPI_Comm comm)
use mpi
CALL MPI_REDUCE_SCATTER(sendbuf, recvbuf, recvcounts, datatype,
                        op, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: recvcounts(*), datatype, op, comm, ierr
sendbuf IN Initial address of send buffer (choice).
recvbuf OUT Initial address of receive buffer (choice).
recvcounts IN Number of elements distributed to each process. Must be identical on all calling processes (array of integers).
datatype IN Datatype of each send buffer element (handle).
op IN Operation (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Reduce_scatter_block (C) MPI_REDUCE_SCATTER_BLOCK (Fortran)

MPI_Reduce_scatter_block combines values and scatters the results with equal blocks.

#include <mpi.h>
int MPI_Reduce_scatter_block(void *sendbuf, void *recvbuf,
                             int recvcount, MPI_Datatype datatype,
                             MPI_Op op, MPI_Comm comm)
use mpi
CALL MPI_REDUCE_SCATTER_BLOCK(sendbuf, recvbuf, recvcount, datatype,
                              op, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: recvcount, datatype, op, comm, ierr
sendbuf IN Initial address of send buffer (choice).
recvbuf OUT Initial address of receive buffer (choice).
recvcount IN Element count per block (non-negative integer).
datatype IN Datatype of each send buffer element (handle).
op IN Operation (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Scan (C) MPI_SCAN (Fortran)

MPI_Scan computes the scan (partial reductions) of data on a collection of processes.

#include <mpi.h>
int MPI_Scan(void *sendbuf, void *recvbuf, int count,
             MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
use mpi
CALL MPI_SCAN(sendbuf, recvbuf, count, datatype, op, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: count, datatype, op, comm, ierr
sendbuf IN Initial address of send buffer (choice).
recvbuf OUT Initial address of receive buffer (choice).
count IN Number of elements in send buffer (integer).
datatype IN Datatype of each send buffer element (handle).
op IN Operation (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Scatter (C) MPI_SCATTER (Fortran)

MPI_Scatter scatters data from one task to all other tasks in a group.

#include <mpi.h>
int MPI_Scatter(void *sendbuf, int sendcnt,
                MPI_Datatype sendtype, void *recvbuf,
                int recvcnt, MPI_Datatype recvtype,
                int root, MPI_Comm comm)
use mpi
CALL MPI_SCATTER(sendbuf, sendcnt, sendtype, recvbuf, recvcnt,
                 recvtype, root, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcnt, sendtype, recvcnt, recvtype, root, comm,
               ierr
sendbuf IN Initial address of send buffer (choice, significant only at root).
sendcnt IN Number of elements sent to each process (integer, significant only at root).
sendtype IN Datatype of each send buffer element (handle, significant only at root).
recvbuf OUT Initial address of receive buffer (choice).
recvcnt IN Number of elements in receive buffer (integer).
recvtype IN Datatype of each receive buffer element (handle).
root IN Rank of sending process (integer).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Scatterv (C) MPI_SCATTERV (Fortran)

MPI_Scatterv scatters a buffer in parts to all tasks in a group.

#include <mpi.h>
int MPI_Scatterv(void *sendbuf, int *sendcnts, int *displs,
                 MPI_Datatype sendtype, void *recvbuf,
                 int recvcnt, MPI_Datatype recvtype,
                 int root, MPI_Comm comm)
use mpi
CALL MPI_SCATTERV(sendbuf, sendcnts, displs, sendtype, recvbuf,
                  recvcnt, recvtype, root, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcnts(*), displs(*), sendtype, recvcnt, recvtype,
               root, comm, ierr
sendbuf IN Initial address of send buffer (choice, significant only at root).
sendcnts IN Integer array (of length group size) specifying the number of elements to send to each processor.
displs IN Integer array (of length group size). Entry i specifies the displacement relative to sendbuf from which to take the outgoing data to process i.
sendtype IN Datatype of each send buffer element (handle, significant only at root).
recvbuf OUT Initial address of receive buffer (choice).
recvcnt IN Number of elements in receive buffer (integer).
recvtype IN Datatype of each receive buffer element (handle).
root IN Rank of sending process (integer).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).


4.4   Groups, Contexts, Communicators, and Caching

MPI_Comm_compare/MPI_COMM_COMPARE Compares two communicators
MPI_Comm_create/MPI_COMM_CREATE Creates a new communicator (subset of a communicator)
MPI_Comm_create_group/MPI_COMM_CREATE_GROUP Creates a new communicator (subset of a communicator)
MPI_Comm_create_keyval/MPI_COMM_CREATE_KEYVAL Creates a new attribute key
MPI_Comm_delete_attr/MPI_COMM_DELETE_ATTR Deletes an attribute value associated with a key
MPI_Comm_dup/MPI_COMM_DUP Duplicates an existing communicator
MPI_Comm_dup_with_info/MPI_COMM_DUP_WITH_INFO Duplicates an existing communicator, associating the supplied info hints with the duplicate
MPI_Comm_free/MPI_COMM_FREE Frees a communicator
MPI_Comm_free_keyval/MPI_COMM_FREE_KEYVAL Frees an attribute key for a communicator cache attribute
MPI_Comm_get_attr/MPI_COMM_GET_ATTR Returns an attribute value by key
MPI_Comm_get_info/MPI_COMM_GET_INFO Returns a new info object containing the hints of the communicator
MPI_Comm_get_name/MPI_COMM_GET_NAME Returns the name associated with a communicator
MPI_Comm_group/MPI_COMM_GROUP Converts a communicator to a group
MPI_Comm_idup/MPI_COMM_IDUP A nonblocking variant of MPI_COMM_DUP
MPI_Comm_rank/MPI_COMM_RANK Determines rank within a communicator
MPI_Comm_remote_group/MPI_COMM_REMOTE_GROUP Remote group of inter-communicator
MPI_Comm_remote_size/MPI_COMM_REMOTE_SIZE Size of a remote group of inter-communicator
MPI_Comm_set_attr/MPI_COMM_SET_ATTR Stores an attribute value associated with a key
MPI_Comm_set_info/MPI_COMM_SET_INFO Sets new values for the hints of the communicator
MPI_Comm_set_name/MPI_COMM_SET_NAME Associates a name with a communicator
MPI_Comm_size/MPI_COMM_SIZE Size of communicator
MPI_Comm_split/MPI_COMM_SPLIT Split of communicator
MPI_Comm_split_type/MPI_COMM_SPLIT_TYPE Partitions the group associated with comm into disjoint subgroups
MPI_Comm_test_inter/MPI_COMM_TEST_INTER Distinguishes between an intra-communicator and inter-communicator
MPI_Group_compare/MPI_GROUP_COMPARE Compares members within a group
MPI_Group_difference/MPI_GROUP_DIFFERENCE Difference of group members
MPI_Group_excl/MPI_GROUP_EXCL Generates a new group (subset of group)
MPI_Group_free/MPI_GROUP_FREE Frees a group
MPI_Group_incl/MPI_GROUP_INCL Generates a new group (subset of group)
MPI_Group_intersection/MPI_GROUP_INTERSECTION Intersection of group members
MPI_Group_range_excl/MPI_GROUP_RANGE_EXCL Generates a new group (subset of group)
MPI_Group_range_incl/MPI_GROUP_RANGE_INCL Generates a new group (subset of group)
MPI_Group_rank/MPI_GROUP_RANK Rank within a group
MPI_Group_size/MPI_GROUP_SIZE Size of a group
MPI_Group_translate_ranks/MPI_GROUP_TRANSLATE_RANKS Queries ranks within different groups
MPI_Group_union/MPI_GROUP_UNION Union of group members
MPI_Intercomm_create/MPI_INTERCOMM_CREATE Generates an inter-communicator
MPI_Intercomm_merge/MPI_INTERCOMM_MERGE Generates an inter-communicator by merging the local and remote groups of inter-communicator
MPI_Type_create_keyval/MPI_TYPE_CREATE_KEYVAL Creates a new attribute key
MPI_Type_delete_attr/MPI_TYPE_DELETE_ATTR Deletes an attribute value of a datatype associated with a key
MPI_Type_free_keyval/MPI_TYPE_FREE_KEYVAL Frees an attribute key for a datatype cache attribute
MPI_Type_get_attr/MPI_TYPE_GET_ATTR Returns an attribute value by key
MPI_Type_get_name/MPI_TYPE_GET_NAME Returns the name associated with a datatype
MPI_Type_set_attr/MPI_TYPE_SET_ATTR Stores an attribute value associated with a key for a datatype
MPI_Type_set_name/MPI_TYPE_SET_NAME Associates a name with a datatype
MPI_Win_create_keyval/MPI_WIN_CREATE_KEYVAL Creates a new attribute key
MPI_Win_delete_attr/MPI_WIN_DELETE_ATTR Deletes a window attribute
MPI_Win_free_keyval/MPI_WIN_FREE_KEYVAL Frees a window attribute key value
MPI_Win_get_attr/MPI_WIN_GET_ATTR Returns an attribute value by key
MPI_Win_get_name/MPI_WIN_GET_NAME Returns the name associated with a window
MPI_Win_set_attr/MPI_WIN_SET_ATTR Associates an attribute value with a window
MPI_Win_set_name/MPI_WIN_SET_NAME Associates a name with a window



MPI_Comm_compare (C) MPI_COMM_COMPARE (Fortran)

MPI_Comm_compare compares two communicators.

#include <mpi.h>
int MPI_Comm_compare(MPI_Comm comm1, MPI_Comm comm2,
                     int *result)
use mpi
CALL MPI_COMM_COMPARE(comm1, comm2, result, ierr)
INTEGER     :: comm1, comm2, result, ierr
comm1 IN Communicator 1 (handle).
comm2 IN Communicator 2 (handle).
result OUT Integer which is MPI_IDENT if the contexts and groups are the same, MPI_CONGRUENT if the contexts differ but the groups are identical in members and order, MPI_SIMILAR if the group members are the same but the order differs, and MPI_UNEQUAL otherwise.
ierr OUT Return code (Fortran only).



MPI_Comm_create (C) MPI_COMM_CREATE (Fortran)

MPI_Comm_create creates a new communicator.

#include <mpi.h>
int MPI_Comm_create(MPI_Comm comm, MPI_Group group,
                    MPI_Comm *comm_out)
use mpi
CALL MPI_COMM_CREATE(comm, group, comm_out, ierr)
INTEGER     :: comm, group, comm_out, ierr
comm IN Communicator (handle).
group IN Group, which is a subset of the group of comm (handle).
comm_out OUT New communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_create_group (C) MPI_COMM_CREATE_GROUP (Fortran)

MPI_Comm_create_group creates a new communicator.

#include <mpi.h>
int MPI_Comm_create_group(MPI_Comm comm, MPI_Group group, int tag,
                          MPI_Comm *newcomm)
use mpi
CALL MPI_COMM_CREATE_GROUP(comm, group, tag, newcomm, ierr)
INTEGER     :: comm, group, tag, newcomm, ierr
comm IN Communicator (handle).
group IN Group, which is a subset of the group of comm (handle).
tag IN Tag (integer).
newcomm OUT New communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_create_keyval (C) MPI_COMM_CREATE_KEYVAL (Fortran)

MPI_Comm_create_keyval creates a new attribute key.

#include <mpi.h>
int MPI_Comm_create_keyval(MPI_Comm_copy_attr_function *comm_copy_attr_fn,
                           MPI_Comm_delete_attr_function *comm_delete_attr_fn,
                           int *comm_keyval, void *extra_state)
use mpi
CALL MPI_COMM_CREATE_KEYVAL(comm_copy_attr_fn, comm_delete_attr_fn,
                            comm_keyval, extra_state, ierr)
EXTERNAL                       :: comm_copy_attr_fn, comm_delete_attr_fn
INTEGER                        :: comm_keyval, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: extra_state
comm_copy_attr_fn IN Copy callback function for comm_keyval (function).
comm_delete_attr_fn IN Delete callback function for comm_keyval (function).
comm_keyval OUT Key value for future access (integer).
extra_state IN Extra state for callback functions.
ierr OUT Return code (Fortran only).



MPI_Comm_delete_attr (C) MPI_COMM_DELETE_ATTR (Fortran)

MPI_Comm_delete_attr deletes an attribute value associated with a key.

#include <mpi.h>
int MPI_Comm_delete_attr(MPI_Comm comm, int comm_keyval)
use mpi
CALL MPI_COMM_DELETE_ATTR(comm, comm_keyval, ierr)
INTEGER     :: comm, comm_keyval, ierr
comm INOUT Communicator from which the attribute is deleted (handle).
comm_keyval IN Key value (integer).
ierr OUT Return code (Fortran only).



MPI_Comm_dup (C) MPI_COMM_DUP (Fortran)

MPI_Comm_dup duplicates an existing communicator.

#include <mpi.h>
int MPI_Comm_dup(MPI_Comm comm, MPI_Comm *comm_out)
use mpi
CALL MPI_COMM_DUP(comm, comm_out, ierr)
INTEGER     :: comm, comm_out, ierr
comm IN Communicator (handle).
comm_out OUT Copy of comm (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_dup_with_info (C) MPI_COMM_DUP_WITH_INFO (Fortran)

MPI_Comm_dup_with_info duplicates an existing communicator; the info hints of the original communicator are not duplicated, and the hints supplied in info are associated with the new communicator instead.

#include <mpi.h>
int MPI_Comm_dup_with_info(MPI_Comm comm, MPI_Info info, MPI_Comm *newcomm)
use mpi
CALL MPI_COMM_DUP_WITH_INFO(comm, info, newcomm, ierr)
INTEGER     :: comm, info, newcomm, ierr
comm IN Communicator (handle).
info IN Info object (handle).
newcomm OUT Copy of comm (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_free (C) MPI_COMM_FREE (Fortran)

MPI_Comm_free frees a communicator object.

#include <mpi.h>
int MPI_Comm_free(MPI_Comm *commp)
use mpi
CALL MPI_COMM_FREE(commp, ierr)
INTEGER     :: commp, ierr
commp INOUT Communicator to be freed (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_free_keyval (C) MPI_COMM_FREE_KEYVAL (Fortran)

MPI_Comm_free_keyval frees an attribute key for a communicator cache attribute.

#include <mpi.h>
int MPI_Comm_free_keyval(int *comm_keyval)
use mpi
CALL MPI_COMM_FREE_KEYVAL(comm_keyval, ierr)
INTEGER     :: comm_keyval, ierr
comm_keyval INOUT Key value (integer).
ierr OUT Return code (Fortran only).



MPI_Comm_get_attr (C) MPI_COMM_GET_ATTR (Fortran)

MPI_Comm_get_attr returns an attribute value by key.

#include <mpi.h>
int MPI_Comm_get_attr(MPI_Comm comm, int comm_keyval,
                      void *attribute_val, int *flag)
use mpi
CALL MPI_COMM_GET_ATTR(comm, comm_keyval, attribute_val,
                       flag, ierr)
INTEGER                        :: comm, comm_keyval, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: attribute_val
LOGICAL                        :: flag
comm IN Communicator (handle).
comm_keyval IN Key value (integer).
attribute_val OUT Attribute value, unless flag = false.
flag OUT False if no attribute is associated with the key (logical).
ierr OUT Return code (Fortran only).



MPI_Comm_get_info (C) MPI_COMM_GET_INFO (Fortran)

MPI_Comm_get_info returns a new info object containing the hints of the communicator.

#include <mpi.h>
int MPI_Comm_get_info(MPI_Comm comm, MPI_Info *info_used)
use mpi
CALL MPI_COMM_GET_INFO(comm, info_used, ierr)
INTEGER     :: comm, info_used, ierr
comm IN Communicator object (handle).
info_used OUT New info object (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_get_name (C) MPI_COMM_GET_NAME (Fortran)

MPI_Comm_get_name returns the name associated with a communicator.

#include <mpi.h>
int MPI_Comm_get_name(MPI_Comm comm, char *comm_name,
                     int *resultlen)
use mpi
CALL MPI_COMM_GET_NAME(comm, comm_name, resultlen, ierr)
CHARACTER*(*) :: comm_name
INTEGER       :: comm, resultlen, ierr
comm IN Communicator whose name is to be returned (handle).
comm_name OUT The name associated with a communicator, or an empty string if no name exists (string).
resultlen OUT Length of returned name (integer).
ierr OUT Return code (Fortran only).



MPI_Comm_group (C) MPI_COMM_GROUP (Fortran)

MPI_Comm_group accesses the group associated with a communicator.

#include <mpi.h>
int MPI_Comm_group(MPI_Comm comm, MPI_Group *group)
use mpi
CALL MPI_COMM_GROUP(comm, group, ierr)
INTEGER     :: comm, group, ierr
comm IN Communicator (handle).
group OUT Group associated with the communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_idup (C) MPI_COMM_IDUP (Fortran)

MPI_Comm_idup is a nonblocking variant of MPI_COMM_DUP.

#include <mpi.h>
int MPI_Comm_idup(MPI_Comm comm, MPI_Comm *newcomm, MPI_Request *request)
use mpi
CALL MPI_COMM_IDUP(comm, newcomm, request, ierr)
INTEGER     :: comm, newcomm, request, ierr
comm IN Communicator (handle).
newcomm OUT Copy of comm (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_rank (C) MPI_COMM_RANK (Fortran)

MPI_Comm_rank determines the rank of the calling process in a communicator.

#include <mpi.h>
int MPI_Comm_rank(MPI_Comm comm, int *rank)
use mpi
CALL MPI_COMM_RANK(comm, rank, ierr)
INTEGER     :: comm, rank, ierr
comm IN Communicator (handle).
rank OUT Rank of the calling process in group of comm (integer).
ierr OUT Return code (Fortran only).



MPI_Comm_remote_group (C) MPI_COMM_REMOTE_GROUP (Fortran)

MPI_Comm_remote_group accesses the remote group associated with an inter-communicator.

#include <mpi.h>
int MPI_Comm_remote_group(MPI_Comm comm, MPI_Group *group)
use mpi
CALL MPI_COMM_REMOTE_GROUP(comm, group, ierr)
INTEGER     :: comm, group, ierr
comm IN Communicator (handle).
group OUT Remote group of the communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_remote_size (C) MPI_COMM_REMOTE_SIZE (Fortran)

MPI_Comm_remote_size determines the size of the remote group associated with an inter-communicator.

#include <mpi.h>
int MPI_Comm_remote_size(MPI_Comm comm, int *size)
use mpi
CALL MPI_COMM_REMOTE_SIZE(comm, size, ierr)
INTEGER     :: comm, size, ierr
comm IN Communicator (handle).
size OUT Number of processes in group of comm (integer).
ierr OUT Return code (Fortran only).



MPI_Comm_set_attr (C) MPI_COMM_SET_ATTR (Fortran)

MPI_Comm_set_attr stores an attribute value associated with a key.

#include <mpi.h>
int MPI_Comm_set_attr(MPI_Comm comm, int comm_keyval,
                      void *attribute_val)
use mpi
CALL MPI_COMM_SET_ATTR(comm, comm_keyval, attribute_val, ierr)
INTEGER                        :: comm, comm_keyval, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: attribute_val
comm INOUT Communicator from which attribute is attached (handle).
comm_keyval IN Key value (integer).
attribute_val IN Attribute value.
ierr OUT Return code (Fortran only).



MPI_Comm_set_info (C) MPI_COMM_SET_INFO (Fortran)

MPI_Comm_set_info sets new values for the hints of the communicator.

#include <mpi.h>
int MPI_Comm_set_info(MPI_Comm comm, MPI_Info info)
use mpi
CALL MPI_COMM_SET_INFO(comm, info, ierr)
INTEGER     :: comm, info, ierr
comm INOUT Communicator (handle).
info IN Info object (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_set_name (C) MPI_COMM_SET_NAME (Fortran)

MPI_Comm_set_name associates a name with a communicator.

#include <mpi.h>
int MPI_Comm_set_name(MPI_Comm comm, char *comm_name)
use mpi
CALL MPI_COMM_SET_NAME(comm, comm_name, ierr)
CHARACTER*(*) :: comm_name
INTEGER       :: comm, ierr
comm INOUT Communicator whose identifier is to be set (handle).
comm_name IN Name to associate with communicator (string).
ierr OUT Return code (Fortran only).
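
An illustrative C sketch combining MPI_Comm_set_name with MPI_Comm_get_name, described above:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    char name[MPI_MAX_OBJECT_NAME];
    int len;

    MPI_Init(&argc, &argv);
    MPI_Comm_set_name(MPI_COMM_WORLD, "world");   /* attach a label */
    MPI_Comm_get_name(MPI_COMM_WORLD, name, &len);
    printf("communicator name: %s (length %d)\n", name, len);
    MPI_Finalize();
    return 0;
}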



MPI_Comm_size (C) MPI_COMM_SIZE (Fortran)

MPI_Comm_size determines the size of the group associated with a communicator.

#include <mpi.h>
int MPI_Comm_size(MPI_Comm comm, int *size)
use mpi
CALL MPI_COMM_SIZE(comm, size, ierr)
INTEGER     :: comm, size, ierr
comm IN Communicator (handle).
size OUT Number of processes in the group of comm (integer).
ierr OUT Return code (Fortran only).
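
A minimal C sketch using MPI_Comm_rank and MPI_Comm_size together, the usual first step of an MPI program:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */
    printf("process %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}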



MPI_Comm_split (C) MPI_COMM_SPLIT (Fortran)

MPI_Comm_split creates new communicators based on colors and keys.

#include <mpi.h>
int MPI_Comm_split(MPI_Comm comm_in, int color, int key,
                   MPI_Comm *comm_out)
use mpi
CALL MPI_COMM_SPLIT(comm_in, color, key, comm_out, ierr)
INTEGER     :: comm_in, color, key, comm_out, ierr
comm_in IN Original communicator (handle).
color IN Control of subset assignment (integer).
key IN Control of rank assignment (integer).
comm_out OUT New communicator (handle).
ierr OUT Return code (Fortran only).
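
An illustrative C sketch that splits MPI_COMM_WORLD by rank parity (MPI_Comm_free, defined elsewhere in the MPI standard, releases the new communicator):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, newrank;
    MPI_Comm newcomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Processes with the same color land in the same new communicator;
       key orders ranks within it (here: keep the original order). */
    MPI_Comm_split(MPI_COMM_WORLD, rank % 2, rank, &newcomm);
    MPI_Comm_rank(newcomm, &newrank);
    printf("world rank %d -> rank %d in its parity group\n", rank, newrank);
    MPI_Comm_free(&newcomm);
    MPI_Finalize();
    return 0;
}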



MPI_Comm_split_type (C) MPI_COMM_SPLIT_TYPE (Fortran)

MPI_Comm_split_type partitions the group associated with comm into disjoint subgroups, based on the type specified by split_type.

#include <mpi.h>
int MPI_Comm_split_type(MPI_Comm comm, int split_type, int key, 
                        MPI_Info info, MPI_Comm *newcomm)
use mpi
CALL MPI_COMM_SPLIT_TYPE(comm, split_type, key, info, newcomm, ierr)
INTEGER     :: comm, split_type, key, info, newcomm, ierr
comm IN Communicator (handle).
split_type IN Type of processes to be grouped together (integer).
key IN Control of rank assignment (integer).
info IN Info argument (handle).
newcomm OUT New communicator (handle).
ierr OUT Return code (Fortran only).
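
An illustrative C sketch using the predefined split type MPI_COMM_TYPE_SHARED to group the processes that can share memory (typically those on the same node):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, noderank;
    MPI_Comm nodecomm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* One new communicator per shared-memory domain */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &nodecomm);
    MPI_Comm_rank(nodecomm, &noderank);
    printf("world rank %d is rank %d on its node\n", rank, noderank);
    MPI_Comm_free(&nodecomm);
    MPI_Finalize();
    return 0;
}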



MPI_Comm_test_inter (C) MPI_COMM_TEST_INTER (Fortran)

MPI_Comm_test_inter determines whether a communicator is an inter-communicator.

#include <mpi.h>
int MPI_Comm_test_inter(MPI_Comm comm, int *flag)
use mpi
CALL MPI_COMM_TEST_INTER(comm, flag, ierr)
LOGICAL     :: flag
INTEGER     :: comm, ierr
comm IN Communicator (handle).
flag OUT True if comm is an inter-communicator (logical).
ierr OUT Return code (Fortran only).



MPI_Group_compare (C) MPI_GROUP_COMPARE (Fortran)

MPI_Group_compare compares two groups.

#include <mpi.h>
int MPI_Group_compare(MPI_Group group1, MPI_Group group2,
                      int *result)
use mpi
CALL MPI_GROUP_COMPARE(group1, group2, result, ierr)
INTEGER     :: group1, group2, result, ierr
group1 IN First group (handle).
group2 IN Second group (handle).
result OUT Integer that is MPI_IDENT if the order and members of the two groups are the same, MPI_SIMILAR if only the members are the same, and MPI_UNEQUAL otherwise.
ierr OUT Return code (Fortran only).



MPI_Group_difference (C) MPI_GROUP_DIFFERENCE (Fortran)

MPI_Group_difference creates a group from the difference of two groups.

#include <mpi.h>
int MPI_Group_difference(MPI_Group group1, MPI_Group group2,
                         MPI_Group *group_out)
use mpi
CALL MPI_GROUP_DIFFERENCE(group1, group2, group_out, ierr)
INTEGER     :: group1, group2, group_out, ierr
group1 IN First group (handle).
group2 IN Second group (handle).
group_out OUT Difference group (handle).
ierr OUT Return code (Fortran only).



MPI_Group_excl (C) MPI_GROUP_EXCL (Fortran)

MPI_Group_excl creates a group by excluding ranks of processes from an existing group.

#include <mpi.h>
int MPI_Group_excl(MPI_Group group, int n, int *ranks,
                   MPI_Group *newgroup)
use mpi
CALL MPI_GROUP_EXCL(group, n, ranks, newgroup, ierr)
INTEGER     :: group, n, ranks(*), newgroup, ierr
group IN Group (handle).
n IN Number of elements in array ranks (integer).
ranks IN Ranks of processes in group to be excluded from newgroup (array of integers).
newgroup OUT New group derived from above, in the order defined by group (handle).
ierr OUT Return code (Fortran only).



MPI_Group_free (C) MPI_GROUP_FREE (Fortran)

MPI_Group_free frees a group.

#include <mpi.h>
int MPI_Group_free(MPI_Group *group)
use mpi
CALL MPI_GROUP_FREE(group, ierr)
INTEGER     :: group, ierr
group INOUT Group (handle).
ierr OUT Return code (Fortran only).



MPI_Group_incl (C) MPI_GROUP_INCL (Fortran)

MPI_Group_incl creates a group by including ranks of processes from an existing group.

#include <mpi.h>
int MPI_Group_incl(MPI_Group group, int n, int *ranks,
                   MPI_Group *newgroup)
use mpi
CALL MPI_GROUP_INCL(group, n, ranks, newgroup, ierr)
INTEGER     :: group, n, ranks(*), newgroup, ierr
group IN Group (handle).
n IN Number of elements in array ranks and size of newgroup (integer).
ranks IN Ranks of processes in group to be included in newgroup (array of integers).
newgroup OUT New group derived from above, in the order defined by ranks (handle).
ierr OUT Return code (Fortran only).
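
An illustrative C sketch that builds a group of the even world ranks; for brevity the sketch assumes at most 128 processes:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Group world_group, even_group;
    int size, n, i, rank_in_even;
    int ranks[64];   /* enough even ranks for up to 128 processes */

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    /* Collect the even world ranks and include them in a new group. */
    for (n = 0, i = 0; i < size; i += 2)
        ranks[n++] = i;
    MPI_Group_incl(world_group, n, ranks, &even_group);
    /* MPI_UNDEFINED is returned on processes outside the new group. */
    MPI_Group_rank(even_group, &rank_in_even);
    printf("rank in even group: %d\n", rank_in_even);
    MPI_Group_free(&even_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}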



MPI_Group_intersection (C) MPI_GROUP_INTERSECTION (Fortran)

MPI_Group_intersection creates a group from the intersection of two groups.

#include <mpi.h>
int MPI_Group_intersection(MPI_Group group1, MPI_Group group2,
                           MPI_Group *group_out)
use mpi
CALL MPI_GROUP_INTERSECTION(group1, group2, group_out, ierr)
INTEGER     :: group1, group2, group_out, ierr
group1 IN First group (handle).
group2 IN Second group (handle).
group_out OUT Intersection group (handle).
ierr OUT Return code (Fortran only).



MPI_Group_range_excl (C) MPI_GROUP_RANGE_EXCL (Fortran)

MPI_Group_range_excl creates a group by excluding ranges of ranks from an existing group.

#include <mpi.h>
int MPI_Group_range_excl(MPI_Group group, int n, int ranges[][3],
                         MPI_Group *newgroup)
use mpi
CALL MPI_GROUP_RANGE_EXCL(group, n, ranges, newgroup, ierr)
INTEGER     :: group, n, ranges(3,*), newgroup, ierr
group IN Group (handle).
n IN Number of triplets in array ranges (integer).
ranges IN A one-dimensional array of integer triplets of the form (first rank, last rank, stride) specifying the ranks of processes in group to be excluded from newgroup.
newgroup OUT New group derived from above, in the order defined by group (handle).
ierr OUT Return code (Fortran only).



MPI_Group_range_incl (C) MPI_GROUP_RANGE_INCL (Fortran)

MPI_Group_range_incl creates a group by including ranges of ranks from an existing group.

#include <mpi.h>
int MPI_Group_range_incl(MPI_Group group, int n, int ranges[][3],
                         MPI_Group *newgroup)
use mpi
CALL MPI_GROUP_RANGE_INCL(group, n, ranges, newgroup, ierr)
INTEGER     :: group, n, ranges(3,*), newgroup, ierr
group IN Group (handle).
n IN Number of triplets in array ranges (integer).
ranges IN A one-dimensional array of integer triplets of the form (first rank, last rank, stride) specifying the ranks of processes in group to be included in newgroup.
newgroup OUT New group derived from above, in the order defined by ranges (handle).
ierr OUT Return code (Fortran only).



MPI_Group_rank (C) MPI_GROUP_RANK (Fortran)

MPI_Group_rank returns the rank of this process in a group.

#include <mpi.h>
int MPI_Group_rank(MPI_Group group, int *rank)
use mpi
CALL MPI_GROUP_RANK(group, rank, ierr)
INTEGER     :: group, rank, ierr
group IN Group (handle).
rank OUT Rank of the calling process in group, or MPI_UNDEFINED if the process is not a member of the group (integer).
ierr OUT Return code (Fortran only).



MPI_Group_size (C) MPI_GROUP_SIZE (Fortran)

MPI_Group_size returns the size of a group.

#include <mpi.h>
int MPI_Group_size(MPI_Group group, int *size)
use mpi
CALL MPI_GROUP_SIZE(group, size, ierr)
INTEGER     :: group, size, ierr
group IN Group (handle).
size OUT Number of processes in group (integer).
ierr OUT Return code (Fortran only).



MPI_Group_translate_ranks (C) MPI_GROUP_TRANSLATE_RANKS (Fortran)

MPI_Group_translate_ranks translates the ranks of processes of a group into those from another group.

#include <mpi.h>
int MPI_Group_translate_ranks(MPI_Group group1, int n,
                              int *ranks1, MPI_Group group2,
                              int *ranks2)
use mpi
CALL MPI_GROUP_TRANSLATE_RANKS(group1, n, ranks1, group2,
                               ranks2, ierr)
INTEGER     :: group1, n, ranks1(*), group2, ranks2(*), ierr
group1 IN First group (handle).
n IN Number of ranks in ranks1 and ranks2 arrays (integer).
ranks1 IN Array of zero or more valid ranks in group1.
group2 IN Second group (handle).
ranks2 OUT Array of corresponding ranks in group2, MPI_UNDEFINED if no correspondence exists.
ierr OUT Return code (Fortran only).
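
An illustrative C sketch that translates ranks between a group and a reversed copy of it; the sketch assumes at least 2 and at most 64 processes:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Group world_group, rev_group;
    int size, i, ranks1[2] = {0, 1}, ranks2[2];
    int rev[64];   /* reversed rank order */

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Comm_group(MPI_COMM_WORLD, &world_group);
    for (i = 0; i < size; i++)
        rev[i] = size - 1 - i;
    MPI_Group_incl(world_group, size, rev, &rev_group);
    /* Where do world ranks 0 and 1 sit in the reversed group? */
    MPI_Group_translate_ranks(world_group, 2, ranks1, rev_group, ranks2);
    printf("world ranks 0,1 -> reversed-group ranks %d,%d\n",
           ranks2[0], ranks2[1]);
    MPI_Group_free(&rev_group);
    MPI_Group_free(&world_group);
    MPI_Finalize();
    return 0;
}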



MPI_Group_union (C) MPI_GROUP_UNION (Fortran)

MPI_Group_union creates a group by combining two groups.

#include <mpi.h>
int MPI_Group_union(MPI_Group group1, MPI_Group group2,
                    MPI_Group *group_out)
use mpi
CALL MPI_GROUP_UNION(group1, group2, group_out, ierr)
INTEGER     :: group1, group2, group_out, ierr
group1 IN First group (handle).
group2 IN Second group (handle).
group_out OUT Union group (handle).
ierr OUT Return code (Fortran only).



MPI_Intercomm_create (C) MPI_INTERCOMM_CREATE (Fortran)

MPI_Intercomm_create creates an inter-communicator from two intra-communicators.

#include <mpi.h>
int MPI_Intercomm_create(MPI_Comm local_comm, int local_leader,
                         MPI_Comm peer_comm, int remote_leader,
                         int tag, MPI_Comm *comm_out)
use mpi
CALL MPI_INTERCOMM_CREATE(local_comm, local_leader, peer_comm,
                          remote_leader, tag, comm_out, ierr)
INTEGER     :: local_comm, local_leader, peer_comm,
               remote_leader, tag, comm_out, ierr
local_comm IN Local intra-communicator (handle).
local_leader IN Rank in local_comm of leader, usually 0 (integer).
peer_comm IN Remote communicator (handle).
remote_leader IN Rank in peer_comm of leader, usually 0 (integer).
tag IN Message tag to use for inter-communicator. If multiple MPI_Intercomm_create calls are made, ensure that each local and remote leader uses a different tag (integer).
comm_out OUT New inter-communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Intercomm_merge (C) MPI_INTERCOMM_MERGE (Fortran)

MPI_Intercomm_merge creates an intra-communicator from an inter-communicator.

#include <mpi.h>
int MPI_Intercomm_merge(MPI_Comm comm, int high, MPI_Comm *comm_out)
use mpi
CALL MPI_INTERCOMM_MERGE(comm, high, comm_out, ierr)
LOGICAL     :: high
INTEGER     :: comm, comm_out, ierr
comm IN Inter-Communicator (handle).
high IN Used to order the two groups within the new intra-communicator: processes of the group that specified high = true are ranked after those of the other group (logical).
comm_out OUT New intra-communicator (handle).
ierr OUT Return code (Fortran only).
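
An illustrative C sketch that splits MPI_COMM_WORLD into two halves, connects them with MPI_Intercomm_create, and merges them back; the sketch assumes at least two processes:

#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, color;
    MPI_Comm local, inter, merged;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    /* Split the processes into two halves by parity. */
    color = rank % 2;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &local);
    /* Connect the halves through their leaders (local rank 0).
       The remote leader is addressed by its rank in the peer
       communicator MPI_COMM_WORLD: world rank 1 for group 0,
       world rank 0 for group 1. The tag 99 is arbitrary. */
    MPI_Intercomm_create(local, 0, MPI_COMM_WORLD, 1 - color, 99, &inter);
    /* Merge back into one intra-communicator; 'high' places
       group 1 after group 0 in the new rank order. */
    MPI_Intercomm_merge(inter, color, &merged);
    MPI_Comm_free(&merged);
    MPI_Comm_free(&inter);
    MPI_Comm_free(&local);
    MPI_Finalize();
    return 0;
}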



MPI_Type_create_keyval (C) MPI_TYPE_CREATE_KEYVAL (Fortran)

MPI_Type_create_keyval creates a new attribute key.

#include <mpi.h>
int MPI_Type_create_keyval(MPI_Type_copy_attr_function *type_copy_attr_fn,
                           MPI_Type_delete_attr_function *type_delete_attr_fn,
                           int *type_keyval, void *extra_state)
use mpi
CALL MPI_TYPE_CREATE_KEYVAL(type_copy_attr_fn, type_delete_attr_fn,
                            type_keyval, extra_state, ierr)
EXTERNAL                       :: type_copy_attr_fn, type_delete_attr_fn
INTEGER                        :: type_keyval, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: extra_state
type_copy_attr_fn IN Copy callback function for type_keyval (function).
type_delete_attr_fn IN Delete callback function for type_keyval (function).
type_keyval OUT Key value for future access (integer).
extra_state IN Extra state for callback functions.
ierr OUT Return code (Fortran only).



MPI_Type_delete_attr (C) MPI_TYPE_DELETE_ATTR (Fortran)

MPI_Type_delete_attr deletes an attribute value of a datatype associated with a key.

#include <mpi.h>
int MPI_Type_delete_attr(MPI_Datatype type, int type_keyval)
use mpi
CALL MPI_TYPE_DELETE_ATTR(type, type_keyval, ierr)
INTEGER     :: type, type_keyval, ierr
type INOUT Datatype from which to delete attribute (handle).
type_keyval IN Key value (integer).
ierr OUT Return code (Fortran only).



MPI_Type_free_keyval (C) MPI_TYPE_FREE_KEYVAL (Fortran)

MPI_Type_free_keyval frees an attribute key for a datatype cache attribute.

#include <mpi.h>
int MPI_Type_free_keyval(int *type_keyval)
use mpi
CALL MPI_TYPE_FREE_KEYVAL(type_keyval, ierr)
INTEGER     :: type_keyval, ierr
type_keyval INOUT Key value (integer).
ierr OUT Return code (Fortran only).



MPI_Type_get_attr (C) MPI_TYPE_GET_ATTR (Fortran)

MPI_Type_get_attr returns an attribute value by key.

#include <mpi.h>
int MPI_Type_get_attr(MPI_Datatype type, int type_keyval,
                      void *attribute_val, int *flag)
use mpi
CALL MPI_TYPE_GET_ATTR(type, type_keyval, attribute_val, flag, ierr)
LOGICAL                        :: flag
INTEGER                        :: type, type_keyval, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: attribute_val
type IN Datatype to which attribute is attached (handle).
type_keyval IN Key value (integer).
attribute_val OUT Attribute value unless flag=false.
flag OUT False if no attribute is associated with key (logical).
ierr OUT Return code (Fortran only).



MPI_Type_get_name (C) MPI_TYPE_GET_NAME (Fortran)

MPI_Type_get_name returns the name associated with a datatype.

#include <mpi.h>
int MPI_Type_get_name(MPI_Datatype type, char *type_name,
                      int *resultlen)
use mpi
CALL MPI_TYPE_GET_NAME(type, type_name, resultlen, ierr)
CHARACTER*(*) :: type_name
INTEGER       :: type, resultlen, ierr
type IN Datatype (handle).
type_name OUT Name associated with datatype or empty string if no name exists (string).
resultlen OUT Length of returned name (integer).
ierr OUT Return code (Fortran only).



MPI_Type_set_attr (C) MPI_TYPE_SET_ATTR (Fortran)

MPI_Type_set_attr stores an attribute value associated with a key for a datatype.

#include <mpi.h>
int MPI_Type_set_attr(MPI_Datatype type, int type_keyval,
                      void *attribute_val)
use mpi
CALL MPI_TYPE_SET_ATTR(type, type_keyval, attribute_val, ierr)
INTEGER                        :: type, type_keyval, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: attribute_val
type INOUT Datatype to which attribute is attached (handle).
type_keyval IN Key value (integer).
attribute_val IN Attribute value.
ierr OUT Return code (Fortran only).



MPI_Type_set_name (C) MPI_TYPE_SET_NAME (Fortran)

MPI_Type_set_name associates a name with a datatype.

#include <mpi.h>
int MPI_Type_set_name(MPI_Datatype type, char *type_name)
use mpi
CALL MPI_TYPE_SET_NAME(type, type_name, ierr)
CHARACTER*(*) :: type_name
INTEGER       :: type, ierr
type INOUT Datatype to be named (handle).
type_name IN Name (string).
ierr OUT Return code (Fortran only).
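
An illustrative C sketch pairing MPI_Type_set_name with MPI_Type_get_name; MPI_Type_contiguous and MPI_Type_free, covered with the datatype procedures, merely supply a datatype worth naming:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Datatype pair;
    char name[MPI_MAX_OBJECT_NAME];
    int len;

    MPI_Init(&argc, &argv);
    MPI_Type_contiguous(2, MPI_DOUBLE, &pair);   /* two doubles */
    MPI_Type_set_name(pair, "double_pair");
    MPI_Type_get_name(pair, name, &len);
    printf("datatype name: %s\n", name);
    MPI_Type_free(&pair);
    MPI_Finalize();
    return 0;
}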



MPI_Win_create_keyval (C) MPI_WIN_CREATE_KEYVAL (Fortran)

MPI_Win_create_keyval creates a new attribute key.

#include <mpi.h>
int MPI_Win_create_keyval(MPI_Win_copy_attr_function *win_copy_attr_fn,
                          MPI_Win_delete_attr_function *win_delete_attr_fn,
                          int *win_keyval, void *extra_state)
use mpi
CALL MPI_WIN_CREATE_KEYVAL(win_copy_attr_fn, win_delete_attr_fn,
                           win_keyval, extra_state, ierr)
EXTERNAL                       :: win_copy_attr_fn, win_delete_attr_fn
INTEGER                        :: win_keyval, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: extra_state
win_copy_attr_fn IN Copy callback function for win_keyval (function).
win_delete_attr_fn IN Delete callback function for win_keyval (function).
win_keyval OUT Key value for future access (integer).
extra_state IN Extra state for callback functions.
ierr OUT Return code (Fortran only).



MPI_Win_delete_attr (C) MPI_WIN_DELETE_ATTR (Fortran)

MPI_Win_delete_attr deletes a window attribute.

#include <mpi.h>
int MPI_Win_delete_attr(MPI_Win win, int win_keyval)
use mpi
CALL MPI_WIN_DELETE_ATTR(win, win_keyval, ierr)
INTEGER     :: win, win_keyval, ierr
win INOUT Window from which to delete attribute (handle).
win_keyval IN Key value (integer).
ierr OUT Return code (Fortran only).



MPI_Win_free_keyval (C) MPI_WIN_FREE_KEYVAL (Fortran)

MPI_Win_free_keyval frees a window attribute key value.

#include <mpi.h>
int MPI_Win_free_keyval(int *win_keyval)
use mpi
CALL MPI_WIN_FREE_KEYVAL(win_keyval, ierr)
INTEGER     :: win_keyval, ierr
win_keyval INOUT Key value (integer).
ierr OUT Return code (Fortran only).



MPI_Win_get_attr (C) MPI_WIN_GET_ATTR (Fortran)

MPI_Win_get_attr returns an attribute value by key.

#include <mpi.h>
int MPI_Win_get_attr(MPI_Win win, int win_keyval,
                     void *attribute_val, int *flag)
use mpi
CALL MPI_WIN_GET_ATTR(win, win_keyval, attribute_val, flag, ierr)
INTEGER                        :: win, win_keyval, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: attribute_val
LOGICAL                        :: flag
win IN Window to which attribute is attached (handle).
win_keyval IN Key value (integer).
attribute_val OUT Attribute value unless flag=false.
flag OUT False if no attribute is associated with key (logical).
ierr OUT Return code (Fortran only).



MPI_Win_get_name (C) MPI_WIN_GET_NAME (Fortran)

MPI_Win_get_name returns the name associated with a window.

#include <mpi.h>
int MPI_Win_get_name(MPI_Win win, char *win_name, int *resultlen)
use mpi
CALL MPI_WIN_GET_NAME(win, win_name, resultlen, ierr)
CHARACTER*(*) :: win_name
INTEGER       :: win, resultlen, ierr
win IN Window object (handle).
win_name OUT Name associated with window or empty string if no name exists (string).
resultlen OUT Length of returned name (integer).
ierr OUT Return code (Fortran only).



MPI_Win_set_attr (C) MPI_WIN_SET_ATTR (Fortran)

MPI_Win_set_attr associates an attribute value with a window.

#include <mpi.h>
int MPI_Win_set_attr(MPI_Win win, int win_keyval,
                     void *attribute_val)
use mpi
CALL MPI_WIN_SET_ATTR(win, win_keyval, attribute_val, ierr)
INTEGER                        :: win, win_keyval, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: attribute_val
win INOUT Window with which attribute is associated (handle).
win_keyval IN Key value (integer).
attribute_val IN Attribute value.
ierr OUT Return code (Fortran only).



MPI_Win_set_name (C) MPI_WIN_SET_NAME (Fortran)

MPI_Win_set_name associates a name with a window.

#include <mpi.h>
int MPI_Win_set_name(MPI_Win win, char *win_name)
use mpi
CALL MPI_WIN_SET_NAME(win, win_name, ierr)
CHARACTER*(*) :: win_name
INTEGER       :: win, ierr
win INOUT Window to be named (handle).
win_name IN Name (string).
ierr OUT Return code (Fortran only).


4.5   Process Topologies

MPI_Cart_coords/MPI_CART_COORDS Converts a rank to a coordinate position
MPI_Cart_create/MPI_CART_CREATE Generates a communicator having Cartesian topology information
MPI_Cart_get/MPI_CART_GET Gets Cartesian topology information
MPI_Cart_map/MPI_CART_MAP Calculates program's own rank in Cartesian topology (low-level topology function)
MPI_Cart_rank/MPI_CART_RANK Converts a coordinate position to a rank value
MPI_Cart_shift/MPI_CART_SHIFT Determines the ranks of the source and destination processes for a shift of data along a coordinate axis
MPI_Cart_sub/MPI_CART_SUB Splits a Cartesian structure
MPI_Cartdim_get/MPI_CARTDIM_GET Gets Cartesian topology information
MPI_Dims_create/MPI_DIMS_CREATE Selects a Cartesian topology for adequate balance between dimension sizes
MPI_Dist_graph_create/MPI_DIST_GRAPH_CREATE Create a communicator to which the distributed graph topology information is attached
MPI_Dist_graph_create_adjacent/MPI_DIST_GRAPH_CREATE_ADJACENT Create a communicator to which the distributed graph topology information is attached
MPI_Dist_graph_neighbors/MPI_DIST_GRAPH_NEIGHBORS Provides adjacency information for a distributed graph topology
MPI_Dist_graph_neighbors_count/MPI_DIST_GRAPH_NEIGHBORS_COUNT Provides adjacency information for a distributed graph topology
MPI_Graph_create/MPI_GRAPH_CREATE Creates a communicator having graph topology information
MPI_Graph_get/MPI_GRAPH_GET Gets graph topology information
MPI_Graph_map/MPI_GRAPH_MAP Calculates program's own rank in graph topology (low-level topology function)
MPI_Graph_neighbors/MPI_GRAPH_NEIGHBORS Gets process information about neighbors on graphs
MPI_Graph_neighbors_count/MPI_GRAPH_NEIGHBORS_COUNT Numbers of neighbors' processes on graph
MPI_Graphdims_get/MPI_GRAPHDIMS_GET Gets graph topology information
MPI_Ineighbor_allgather/MPI_INEIGHBOR_ALLGATHER Gathers and scatters data (Nonblocking neighborhood communication version)
MPI_Ineighbor_allgatherv/MPI_INEIGHBOR_ALLGATHERV Gathers and scatters data (Nonblocking neighborhood communication version)
MPI_Ineighbor_alltoall/MPI_INEIGHBOR_ALLTOALL Gathers and scatters data (Nonblocking neighborhood communication version)
MPI_Ineighbor_alltoallv/MPI_INEIGHBOR_ALLTOALLV Gathers and scatters data (Nonblocking neighborhood communication version)
MPI_Ineighbor_alltoallw/MPI_INEIGHBOR_ALLTOALLW Gathers and scatters data (Nonblocking neighborhood communication version)
MPI_Neighbor_allgather/MPI_NEIGHBOR_ALLGATHER Gathers and scatters data (Neighborhood communication version)
MPI_Neighbor_allgatherv/MPI_NEIGHBOR_ALLGATHERV Gathers and scatters data (Neighborhood communication version)
MPI_Neighbor_alltoall/MPI_NEIGHBOR_ALLTOALL Gathers and scatters data (Neighborhood communication version)
MPI_Neighbor_alltoallv/MPI_NEIGHBOR_ALLTOALLV Gathers and scatters data (Neighborhood communication version)
MPI_Neighbor_alltoallw/MPI_NEIGHBOR_ALLTOALLW Gathers and scatters data (Neighborhood communication version)
MPI_Topo_test/MPI_TOPO_TEST Inquiry about topology



MPI_Cart_coords (C) MPI_CART_COORDS (Fortran)

MPI_Cart_coords determines the coordinates of a process in Cartesian topology given its rank in a group.

#include <mpi.h>
int MPI_Cart_coords(MPI_Comm comm, int rank, int maxdims,
                    int *coords)
use mpi
CALL MPI_CART_COORDS(comm, rank, maxdims, coords, ierr)
INTEGER     :: comm, rank, maxdims, coords(*), ierr
comm IN Communicator with Cartesian structure (handle).
rank IN Rank of a process within group of comm (integer).
maxdims IN Length of vector coords in the calling program (integer).
coords OUT Integer array of size maxdims containing the Cartesian coordinates of the specified process (array of integers).
ierr OUT Return code (Fortran only).



MPI_Cart_create (C) MPI_CART_CREATE (Fortran)

MPI_Cart_create creates a new communicator to which topology information is attached.

#include <mpi.h>
int MPI_Cart_create(MPI_Comm comm_old, int ndims, int *dims,
                    int *periods, int reorder,
                    MPI_Comm *comm_cart)
use mpi
CALL MPI_CART_CREATE(comm_old, ndims, dims, periods, reorder, comm_cart, ierr)
LOGICAL     :: periods(*), reorder
INTEGER     :: comm_old, ndims, dims(*), comm_cart, ierr
comm_old IN Input communicator (handle).
ndims IN Number of dimensions of Cartesian grid (integer).
dims IN Integer array of size ndims specifying the number of processes in each dimension.
periods IN Logical array of size ndims specifying whether the grid is periodic (true) or not (false) in each dimension.
reorder IN Ranking may be reordered (true) or not (false) (logical).
comm_cart OUT Communicator with new Cartesian topology (handle).
ierr OUT Return code (Fortran only).
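
An illustrative C sketch that builds a periodic two-dimensional grid and queries the calling process's coordinates; MPI_Dims_create is described later in this section:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int dims[2] = {0, 0}, periods[2] = {1, 1};  /* periodic in both dims */
    int size, rank, coords[2];
    MPI_Comm cart;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Dims_create(size, 2, dims);             /* balanced 2-D grid */
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);
    MPI_Comm_rank(cart, &rank);
    MPI_Cart_coords(cart, rank, 2, coords);
    printf("rank %d has coordinates (%d,%d) in a %dx%d grid\n",
           rank, coords[0], coords[1], dims[0], dims[1]);
    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}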



MPI_Cart_get (C) MPI_CART_GET (Fortran)

MPI_Cart_get returns Cartesian topology information associated with a communicator.

#include <mpi.h>
int MPI_Cart_get(MPI_Comm comm, int maxdims, int *dims,
                 int *periods, int *coords)
use mpi
CALL MPI_CART_GET(comm, maxdims, dims, periods, coords, ierr)
LOGICAL     :: periods(*)
INTEGER     :: comm, maxdims, dims(*), coords(*), ierr
comm IN Communicator with Cartesian structure (handle).
maxdims IN Length of vectors dims, periods, and coords in the calling program (integer).
dims OUT Number of processes for each Cartesian dimension (array of integer).
periods OUT Periodicity (true/false) for each Cartesian dimension (array of logical).
coords OUT Coordinates of calling process in Cartesian structure (array of integer).
ierr OUT Return code (Fortran only).



MPI_Cart_map (C) MPI_CART_MAP (Fortran)

MPI_Cart_map maps a process to Cartesian topology information.

#include <mpi.h>
int MPI_Cart_map(MPI_Comm comm_old, int ndims, int *dims,
                 int *periods, int *newrank)
use mpi
CALL MPI_CART_MAP(comm_old, ndims, dims, periods, newrank, ierr)
LOGICAL     :: periods(*)
INTEGER     :: comm_old, ndims, dims(*), newrank, ierr
comm_old IN Input communicator (handle).
ndims IN Number of dimensions of Cartesian structure (integer).
dims IN Integer array of size ndims specifying the number of processes in each coordinate direction.
periods IN Logical array of size ndims specifying the periodicity specification in each coordinate direction.
newrank OUT Reordered rank of the calling process; MPI_UNDEFINED if calling process does not belong to grid (integer).
ierr OUT Return code (Fortran only).



MPI_Cart_rank (C) MPI_CART_RANK (Fortran)

MPI_Cart_rank determines a process's rank in a communicator given its Cartesian location.

#include <mpi.h>
int MPI_Cart_rank(MPI_Comm comm, int *coords, int *rank)
use mpi
CALL MPI_CART_RANK(comm, coords, rank, ierr)
INTEGER     :: comm, coords(*), rank, ierr
comm IN Communicator with Cartesian structure (handle).
coords IN Integer array of size ndims specifying the Cartesian coordinates of a process.
rank OUT Rank of specified process (integer).
ierr OUT Return code (Fortran only).



MPI_Cart_shift (C) MPI_CART_SHIFT (Fortran)

MPI_Cart_shift returns the shifted source and destination ranks given a shift direction and amount.

#include <mpi.h>
int MPI_Cart_shift(MPI_Comm comm, int direction, int displ,
                   int *source, int *dest)
use mpi
CALL MPI_CART_SHIFT(comm, direction, displ, source, dest, ierr)
INTEGER     :: comm, direction, displ, source, dest, ierr
comm IN Communicator with Cartesian structure (handle).
direction IN Coordinate dimension of shift (integer).
displ IN Displacement (>0 indicates an upwards shift; <0 indicates a downwards shift) (integer).
source OUT Rank of source process (integer).
dest OUT Rank of destination process (integer).
ierr OUT Return code (Fortran only).
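
An illustrative C sketch computing the communication partners for a displacement of +1 on a one-dimensional periodic grid; source and dest can be passed directly to MPI_Sendrecv for a halo exchange:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int dims[1] = {0}, periods[1] = {1};   /* 1-D periodic ring */
    int size, rank, source, dest;
    MPI_Comm ring;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Dims_create(size, 1, dims);
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 1, &ring);
    MPI_Comm_rank(ring, &rank);
    /* Who sends to me (source) and whom I send to (dest) for a
       displacement of +1 along dimension 0; with a periodic
       dimension the ranks wrap around. */
    MPI_Cart_shift(ring, 0, 1, &source, &dest);
    printf("rank %d: receives from %d, sends to %d\n", rank, source, dest);
    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}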



MPI_Cart_sub (C) MPI_CART_SUB (Fortran)

MPI_Cart_sub partitions a communicator into subgroups that form lower-dimensional Cartesian subgrids.

#include <mpi.h>
int MPI_Cart_sub(MPI_Comm comm, int *remain_dims,
                 MPI_Comm *comm_new)
use mpi
CALL MPI_CART_SUB(comm, remain_dims, comm_new, ierr)
LOGICAL     :: remain_dims(*)
INTEGER     :: comm, comm_new, ierr
comm IN Communicator with Cartesian structure (handle).
remain_dims IN The i-th entry of remain_dims specifies whether the i-th dimension is kept in the subgrid (true) or is dropped (false) (logical vector).
comm_new OUT Communicator containing the subgrid that includes the calling process (handle).
ierr OUT Return code (Fortran only).



MPI_Cartdim_get (C) MPI_CARTDIM_GET (Fortran)

MPI_Cartdim_get returns Cartesian topology information associated with a communicator.

#include <mpi.h>
int MPI_Cartdim_get(MPI_Comm comm, int *ndims)
use mpi
CALL MPI_CARTDIM_GET(comm, ndims, ierr)
INTEGER     :: comm, ndims, ierr
comm IN Communicator with Cartesian structure (handle).
ndims OUT Number of dimensions of the Cartesian structure (integer).
ierr OUT Return code (Fortran only).



MPI_Dims_create (C) MPI_DIMS_CREATE (Fortran)

MPI_Dims_create creates a division of processors in a Cartesian grid.

#include <mpi.h>
int MPI_Dims_create(int nnodes, int ndims, int *dims)
use mpi
CALL MPI_DIMS_CREATE(nnodes, ndims, dims, ierr)
INTEGER     :: nnodes, ndims, dims(*), ierr
nnodes IN Number of nodes in a grid (integer).
ndims IN Number of Cartesian dimensions (integer).
dims INOUT Integer array of size ndims specifying the number of nodes in each dimension.
ierr OUT Return code (Fortran only).
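
An illustrative C sketch factoring 12 nodes into a balanced three-dimensional grid:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int dims[3] = {0, 0, 0};   /* 0 means: let MPI choose this dimension */

    MPI_Init(&argc, &argv);
    /* Nonzero entries in dims would be kept as fixed constraints. */
    MPI_Dims_create(12, 3, dims);
    printf("12 nodes -> %d x %d x %d\n", dims[0], dims[1], dims[2]);
    MPI_Finalize();
    return 0;
}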



MPI_Dist_graph_create (C) MPI_DIST_GRAPH_CREATE (Fortran)

MPI_Dist_graph_create creates a communicator to which the distributed graph topology information is attached.

#include <mpi.h>
int MPI_Dist_graph_create(MPI_Comm comm_old, int n, const int sources[], const int degrees[], 
                          const int destinations[], const int weights[], MPI_Info info, 
                          int reorder, MPI_Comm *comm_dist_graph)
use mpi
CALL MPI_DIST_GRAPH_CREATE(comm_old, n, sources, degrees, destinations, weights, 
                           info, reorder, comm_dist_graph, ierr)
INTEGER     :: comm_old, n, sources(*), degrees(*), destinations(*), weights(*), info, 
               comm_dist_graph, ierr
LOGICAL     :: reorder
comm_old IN Input communicator (handle).
n IN Number of source nodes for which this process specifies edges (non-negative integer).
sources IN Array containing the n source nodes for which this process specifies edges (array of non-negative integers).
degrees IN Array specifying the number of destinations for each source node in the source node array (array of nonnegative integers).
destinations IN Destination nodes for the source nodes in the source node array (array of non-negative integers).
weights IN Weights for source to destination edges (array of non-negative integers).
info IN Hints on optimization and interpretation of weights (handle).
reorder IN The process may be reordered (true) or not (false) (logical).
comm_dist_graph OUT Communicator with distributed graph topology added (handle).
ierr OUT Return code (Fortran only).



MPI_Dist_graph_create_adjacent (C) MPI_DIST_GRAPH_CREATE_ADJACENT (Fortran)

MPI_Dist_graph_create_adjacent creates a communicator to which the distributed graph topology information is attached.

#include <mpi.h>
int MPI_Dist_graph_create_adjacent(MPI_Comm comm_old, int indegree, const int sources[], 
                                   const int sourceweights[], int outdegree, 
                                   const int destinations[], const int destweights[], 
                                   MPI_Info info, int reorder, MPI_Comm *comm_dist_graph)
use mpi
CALL MPI_DIST_GRAPH_CREATE_ADJACENT(comm_old, indegree, sources, sourceweights, outdegree, 
                                    destinations, destweights, info, reorder, comm_dist_graph, 
                                    ierr)
INTEGER     :: comm_old, indegree, sources(*), sourceweights(*), outdegree, destinations(*), 
               destweights(*), info, comm_dist_graph, ierr
LOGICAL     :: reorder
comm_old IN Input communicator (handle).
indegree IN Size of sources and sourceweights arrays (non-negative integer).
sources IN Ranks of processes for which the calling process is a destination (array of non-negative integers).
sourceweights IN Weights of the edges into the calling process (array of non-negative integers).
outdegree IN Size of destinations and destweights arrays (non-negative integer).
destinations IN Ranks of processes for which the calling process is a source (array of non-negative integers).
destweights IN Weights of the edges out of the calling process (array of non-negative integers).
info IN Hints on optimization and interpretation of weights (handle).
reorder IN The rank may be reordered (true) or not (false) (logical).
comm_dist_graph OUT Communicator with distributed graph topology (handle).
ierr OUT Return code (Fortran only).



MPI_Dist_graph_neighbors (C) MPI_DIST_GRAPH_NEIGHBORS (Fortran)

MPI_Dist_graph_neighbors provides adjacency information for a distributed graph topology.

#include <mpi.h>
int MPI_Dist_graph_neighbors(MPI_Comm comm, int maxindegree, int sources[], int sourceweights[], 
                             int maxoutdegree, int destinations[], int destweights[])
use mpi
CALL MPI_DIST_GRAPH_NEIGHBORS(comm, maxindegree, sources, sourceweights, 
                              maxoutdegree, destinations, destweights, ierr)
INTEGER     :: comm, maxindegree, sources(*), sourceweights(*), 
               maxoutdegree, destinations(*), destweights(*), ierr
comm IN Communicator with distributed graph topology (handle).
maxindegree IN Size of sources and sourceweights arrays (non-negative integer).
sources OUT Processes for which the calling process is a destination (array of non-negative integers).
sourceweights OUT Weights of the edges into the calling process (array of non-negative integers).
maxoutdegree IN Size of destinations and destweights arrays (non-negative integer).
destinations OUT Processes for which the calling process is a source (array of non-negative integers).
destweights OUT Weights of the edges out of the calling process (array of non-negative integers).
ierr OUT Return code (Fortran only).



MPI_Dist_graph_neighbors_count (C) MPI_DIST_GRAPH_NEIGHBORS_COUNT (Fortran)

MPI_Dist_graph_neighbors_count provides adjacency information for a distributed graph topology.

#include <mpi.h>
int MPI_Dist_graph_neighbors_count(MPI_Comm comm, int *indegree, int *outdegree, int *weighted)
use mpi
CALL MPI_DIST_GRAPH_NEIGHBORS_COUNT(comm, indegree, outdegree, weighted, ierr)
INTEGER     :: comm, indegree, outdegree, ierr
LOGICAL     :: weighted
comm IN Communicator with distributed graph topology (handle).
indegree OUT Number of edges into this process (non-negative integer).
outdegree OUT Number of edges out of this process (non-negative integer).
weighted OUT False if MPI_UNWEIGHTED was supplied during creation, true otherwise (logical).
ierr OUT Return code (Fortran only).
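
An illustrative C sketch that builds a directed ring with MPI_Dist_graph_create_adjacent and then queries its adjacency information:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, left, right, indeg, outdeg, weighted;
    int src[1], dst[1];
    MPI_Comm ring;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    left  = (rank - 1 + size) % size;
    right = (rank + 1) % size;
    /* Each process receives from its left neighbor and sends to its
       right neighbor: one in-edge, one out-edge, no weights. */
    MPI_Dist_graph_create_adjacent(MPI_COMM_WORLD, 1, &left, MPI_UNWEIGHTED,
                                   1, &right, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 0, &ring);
    MPI_Dist_graph_neighbors_count(ring, &indeg, &outdeg, &weighted);
    MPI_Dist_graph_neighbors(ring, indeg, src, MPI_UNWEIGHTED,
                             outdeg, dst, MPI_UNWEIGHTED);
    printf("rank %d: %d in-edge from %d, %d out-edge to %d\n",
           rank, indeg, src[0], outdeg, dst[0]);
    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}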



MPI_Graph_create (C) MPI_GRAPH_CREATE (Fortran)

MPI_Graph_create creates a new communicator and attaches graph topology information to it.

#include <mpi.h>
int MPI_Graph_create(MPI_Comm comm_old, int nnodes, int *index,
                     int *edges, int reorder, MPI_Comm *comm_graph)
use mpi
CALL MPI_GRAPH_CREATE(comm_old, nnodes, index, edges, reorder, comm_graph, ierr)
LOGICAL     :: reorder
INTEGER     :: comm_old, nnodes, index(*), edges(*), comm_graph, ierr
comm_old IN Input communicator (handle).
nnodes IN Number of nodes in graph (integer).
index IN Array of integers describing node degrees.
edges IN Array of integers describing graph edges.
reorder IN Ranking may be reordered if true or not if false (logical).
comm_graph OUT Communicator with graph topology attached (handle).
ierr OUT Return code (Fortran only).



MPI_Graph_get (C) MPI_GRAPH_GET (Fortran)

MPI_Graph_get returns the graph topology information associated with a communicator.

#include <mpi.h>
int MPI_Graph_get(MPI_Comm comm, int maxindex, int maxedges,
                  int *index, int *edges)
use mpi
CALL MPI_GRAPH_GET(comm, maxindex, maxedges, index, edges, ierr)
INTEGER     :: comm, maxindex, maxedges, index(*), edges(*), ierr
comm IN Communicator with graph topology (handle).
maxindex IN Length of vector index in the calling program (integer).
maxedges IN Length of vector edges in the calling program (integer).
index OUT Array of integers containing graph topology (see MPI_GRAPH_CREATE).
edges OUT Array of integers containing graph topology (see MPI_GRAPH_CREATE).
ierr OUT Return code (Fortran only).



MPI_Graph_map (C) MPI_GRAPH_MAP (Fortran)

MPI_Graph_map maps a process to a graph topology.

#include <mpi.h>
int MPI_Graph_map(MPI_Comm comm_old, int nnodes, int *index,
                  int *edges, int *newrank)
use mpi
CALL MPI_GRAPH_MAP(comm_old, nnodes, index, edges, newrank, ierr)
INTEGER     :: comm_old, nnodes, index(*), edges(*), newrank, ierr
comm_old IN Input communicator (handle).
nnodes IN Number of graph nodes (integer).
index IN Array of integers containing graph topology (see MPI_GRAPH_CREATE).
edges IN Array of integers containing graph topology (see MPI_GRAPH_CREATE).
newrank OUT Reordered rank of the calling process, MPI_UNDEFINED if the calling process does not belong to graph topology (integer).
ierr OUT Return code (Fortran only).



MPI_Graph_neighbors (C) MPI_GRAPH_NEIGHBORS (Fortran)

MPI_Graph_neighbors returns the neighbors of a node associated with a graph topology.

#include <mpi.h>
int MPI_Graph_neighbors(MPI_Comm comm, int rank,
                        int maxneighbors, int *neighbors)
use mpi
CALL MPI_GRAPH_NEIGHBORS(comm, rank, maxneighbors, neighbors,
                         ierr)
INTEGER     :: comm, rank, maxneighbors, neighbors(*), ierr
comm IN Communicator with graph topology (handle).
rank IN Rank of process in group of comm (integer).
maxneighbors IN Size of array neighbors (integer).
neighbors OUT Ranks of processes that are neighbors to specified process (array of integers).
ierr OUT Return code (Fortran only).



MPI_Graph_neighbors_count (C) MPI_GRAPH_NEIGHBORS_COUNT (Fortran)

MPI_Graph_neighbors_count returns the number of neighbors of a node associated with a graph topology.

#include <mpi.h>
int MPI_Graph_neighbors_count(MPI_Comm comm, int rank,
                              int *nneighbors)
use mpi
CALL MPI_GRAPH_NEIGHBORS_COUNT(comm, rank, nneighbors, ierr)
INTEGER     :: comm, rank, nneighbors, ierr
comm IN Communicator with graph topology (handle).
rank IN Rank of process in group of comm (integer).
nneighbors OUT Number of neighbors of specified process (integer).
ierr OUT Return code (Fortran only).



MPI_Graphdims_get (C) MPI_GRAPHDIMS_GET (Fortran)

MPI_Graphdims_get returns the graph topology information associated with a communicator.

#include <mpi.h>
int MPI_Graphdims_get(MPI_Comm comm, int *nnodes, int *nedges)
use mpi
CALL MPI_GRAPHDIMS_GET(comm, nnodes, nedges, ierr)
INTEGER     :: comm, nnodes, nedges, ierr
comm IN Communicator with graph topology (handle).
nnodes OUT Number of nodes in graph topology (integer).
nedges OUT Number of edges in graph topology (integer).
ierr OUT Return code (Fortran only).



MPI_Ineighbor_allgather (C) MPI_INEIGHBOR_ALLGATHER (Fortran)

MPI_Ineighbor_allgather gathers data from all tasks and distributes it to all (Nonblocking neighborhood communication version).

#include <mpi.h>
int MPI_Ineighbor_allgather(void *sendbuf, int sendcount, MPI_Datatype sendtype, 
                            void *recvbuf, int recvcount, MPI_Datatype recvtype,
                            MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_INEIGHBOR_ALLGATHER(sendbuf, sendcount, sendtype, 
                             recvbuf, recvcount, recvtype, 
                             comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcount, recvtype,
               comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice).
recvcount IN Number of elements received from any process (integer).
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Ineighbor_allgatherv (C) MPI_INEIGHBOR_ALLGATHERV (Fortran)

MPI_Ineighbor_allgatherv gathers data from all tasks and distributes it to all (Nonblocking neighborhood communication version).

#include <mpi.h>
int MPI_Ineighbor_allgatherv(void *sendbuf, int sendcount, MPI_Datatype sendtype, 
                             void *recvbuf, int *recvcounts, int *displs, MPI_Datatype recvtype, 
                             MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_INEIGHBOR_ALLGATHERV(sendbuf, sendcount, sendtype, 
                              recvbuf, recvcounts, displs, recvtype, 
                              comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcounts(*), displs(*), recvtype,
               comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice).
recvcounts IN Integer array (of length group size) containing the number of elements that are received from each process.
displs IN Integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i.
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Ineighbor_alltoall (C) MPI_INEIGHBOR_ALLTOALL (Fortran)

MPI_Ineighbor_alltoall sends data from all processes to all (Nonblocking neighborhood communication version).

#include <mpi.h>
int MPI_Ineighbor_alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype, 
                           void *recvbuf, int recvcnt, MPI_Datatype recvtype, 
                           MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_INEIGHBOR_ALLTOALL(sendbuf, sendcount, sendtype, 
                            recvbuf, recvcnt, recvtype, 
                            comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcnt, recvtype, 
               comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements to send to each process (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Initial address of receive buffer (choice).
recvcnt IN Number of elements received from any process (integer).
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Ineighbor_alltoallv (C) MPI_INEIGHBOR_ALLTOALLV (Fortran)

MPI_Ineighbor_alltoallv sends data from all processes to all using a displacement (Nonblocking neighborhood communication version).

#include <mpi.h>
int MPI_Ineighbor_alltoallv(void *sendbuf, int *sendcnts, int *sdispls, MPI_Datatype sendtype, 
                            void *recvbuf, int *recvcnts, int *rdispls, MPI_Datatype recvtype,
                            MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_INEIGHBOR_ALLTOALLV(sendbuf, sendcnts, sdispls, sendtype, 
                            recvbuf, recvcnts, rdispls, recvtype, 
                            comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcnts(*), sdispls(*), sendtype, recvcnts(*), rdispls(*),
               recvtype, comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
sendcnts IN Integer array equal to the group size specifying the number of elements to send to each processor.
sdispls IN Integer array (of length group size). Entry j specifies the displacement (relative to sendbuf) from which to take the outgoing data destined for process j.
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice).
recvcnts IN Integer array equal to the group size specifying the maximum number of elements that can be received from each processor.
rdispls IN Integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i.
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Ineighbor_alltoallw (C) MPI_INEIGHBOR_ALLTOALLW (Fortran)

MPI_Ineighbor_alltoallw sends data from all processes to all using a displacement and datatype (Nonblocking neighborhood communication version).

#include <mpi.h>
int MPI_Ineighbor_alltoallw(void *sendbuf, int *sendcounts, int *sdispls, MPI_Datatype *sendtypes, 
                            void *recvbuf, int *recvcounts, int *rdispls, MPI_Datatype *recvtypes, 
                            MPI_Comm comm, MPI_Request *request)
use mpi
CALL MPI_INEIGHBOR_ALLTOALLW(sendbuf, sendcounts, sdispls, sendtypes,
                            recvbuf, recvcounts, rdispls, recvtypes,
                            comm, request, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcounts(*), sdispls(*), sendtypes(*), recvcounts(*),
               rdispls(*), recvtypes(*), comm, request, ierr
sendbuf IN Initial address of send buffer (choice).
sendcounts IN Integer array equal to the group size specifying the number of elements to send to each processor (integer).
sdispls IN Integer array (of length group size). Entry j specifies the displacement (relative to sendbuf) from which to take the outgoing data destined for process j.
sendtypes IN Array of datatypes (of length group size). Entry j specifies the type of data to send to process j (handle).
recvbuf OUT Address of receive buffer (choice).
recvcounts IN Integer array equal to the group size specifying the maximum number of elements that can be received from each processor (integer).
rdispls IN Integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i.
recvtypes IN Array of datatypes (of length group size). Entry i specifies the type of data received from process i (handle).
comm IN Communicator (handle).
request OUT Communication request (handle).
ierr OUT Return code (Fortran only).



MPI_Neighbor_allgather (C) MPI_NEIGHBOR_ALLGATHER (Fortran)

MPI_Neighbor_allgather gathers data from all tasks and distributes it to all (Neighborhood communication version).

#include <mpi.h>
int MPI_Neighbor_allgather(void *sendbuf, int sendcount, MPI_Datatype sendtype, 
                           void *recvbuf, int recvcount, MPI_Datatype recvtype,
                           MPI_Comm comm)
use mpi
CALL MPI_NEIGHBOR_ALLGATHER(sendbuf, sendcount, sendtype, 
                            recvbuf, recvcount, recvtype, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcount, recvtype, comm,
               ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice).
recvcount IN Number of elements received from any process (integer).
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).
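
An illustrative C sketch of a one-dimensional halo exchange. On a periodic Cartesian topology each process has two neighbors per dimension; their contributions arrive in buffer order negative direction first, then positive:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int dims[1] = {0}, periods[1] = {1};
    int size, rank, mine, halo[2];
    MPI_Comm ring;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Dims_create(size, 1, dims);
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 1, &ring);
    MPI_Comm_rank(ring, &rank);
    mine = rank * 10;   /* stand-in for this process's boundary value */
    /* Send one int to each neighbor; receive one int from each. */
    MPI_Neighbor_allgather(&mine, 1, MPI_INT, halo, 1, MPI_INT, ring);
    printf("rank %d got %d and %d from its neighbors\n",
           rank, halo[0], halo[1]);
    MPI_Comm_free(&ring);
    MPI_Finalize();
    return 0;
}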



MPI_Neighbor_allgatherv (C) MPI_NEIGHBOR_ALLGATHERV (Fortran)

MPI_Neighbor_allgatherv gathers data from all tasks and distributes it to all (Neighborhood communication version).

#include <mpi.h>
int MPI_Neighbor_allgatherv(void *sendbuf, int sendcount, MPI_Datatype sendtype, 
                            void *recvbuf, int *recvcounts, int *displs, 
                            MPI_Datatype recvtype, MPI_Comm comm)
use mpi
CALL MPI_NEIGHBOR_ALLGATHERV(sendbuf, sendcount, sendtype, 
                             recvbuf, recvcounts, displs, recvtype, 
                             comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcounts(*), displs(*), recvtype,
               comm, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements in send buffer (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice).
recvcounts IN Integer array (of length group size) containing the number of elements that are received from each process.
displs IN Integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i.
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Neighbor_alltoall (C) MPI_NEIGHBOR_ALLTOALL (Fortran)

MPI_Neighbor_alltoall sends data from all processes to all (Neighborhood communication version).

#include <mpi.h>
int MPI_Neighbor_alltoall(void *sendbuf, int sendcount, MPI_Datatype sendtype, 
                          void *recvbuf, int recvcnt, MPI_Datatype recvtype, 
                          MPI_Comm comm)
use mpi
CALL MPI_NEIGHBOR_ALLTOALL(sendbuf, sendcount, sendtype, 
                           recvbuf, recvcnt, recvtype, comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcount, sendtype, recvcnt, recvtype, comm, ierr
sendbuf IN Initial address of send buffer (choice).
sendcount IN Number of elements to send to each process (integer).
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Initial address of receive buffer (choice).
recvcnt IN Number of elements received from any process (integer).
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Neighbor_alltoallv (C) MPI_NEIGHBOR_ALLTOALLV (Fortran)

MPI_Neighbor_alltoallv sends data from all processes to all using a displacement (Neighborhood communication version).

#include <mpi.h>
int MPI_Neighbor_alltoallv(void *sendbuf, int *sendcnts, int *sdispls, MPI_Datatype sendtype, 
                           void *recvbuf, int *recvcnts, int *rdispls, MPI_Datatype recvtype,
                           MPI_Comm comm)
use mpi
CALL MPI_NEIGHBOR_ALLTOALLV(sendbuf, sendcnts, sdispls, sendtype, 
                            recvbuf, recvcnts, rdispls, recvtype, 
                            comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcnts(*), sdispls(*), sendtype, recvcnts(*), rdispls(*),
               recvtype, comm, ierr
sendbuf IN Initial address of send buffer (choice).
sendcnts IN Integer array equal to the group size specifying the number of elements to send to each processor.
sdispls IN Integer array (of length group size). Entry j specifies the displacement (relative to sendbuf) from which to take the outgoing data destined for process j.
sendtype IN Datatype of send buffer elements (handle).
recvbuf OUT Address of receive buffer (choice).
recvcnts IN Integer array equal to the group size specifying the maximum number of elements that can be received from each processor.
rdispls IN Integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i.
recvtype IN Datatype of receive buffer elements (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Neighbor_alltoallw (C) MPI_NEIGHBOR_ALLTOALLW (Fortran)

MPI_Neighbor_alltoallw sends data from all processes to all using a displacement and datatype (Neighborhood communication version).

#include <mpi.h>
int MPI_Neighbor_alltoallw(void *sendbuf, int *sendcounts, int *sdispls, MPI_Datatype *sendtypes, 
                           void *recvbuf, int *recvcounts, int *rdispls, MPI_Datatype *recvtypes, 
                           MPI_Comm comm)
use mpi
CALL MPI_NEIGHBOR_ALLTOALLW(sendbuf, sendcounts, sdispls, sendtypes,
                            recvbuf, recvcounts, rdispls, recvtypes,
                            comm, ierr)
<arbitrary> :: sendbuf(*), recvbuf(*)
INTEGER     :: sendcounts(*), sdispls(*), sendtypes(*), recvcounts(*),
               rdispls(*), recvtypes(*), comm, ierr
sendbuf IN Initial address of send buffer (choice).
sendcounts IN Integer array equal to the group size specifying the number of elements to send to each processor (integer).
sdispls IN Integer array (of length group size). Entry j specifies the displacement (relative to sendbuf) from which to take the outgoing data destined for process j.
sendtypes IN Array of datatypes (of length group size). Entry j specifies the type of data to send to process j (handle).
recvbuf OUT Address of receive buffer (choice).
recvcounts IN Integer array equal to the group size specifying the maximum number of elements that can be received from each processor (integer).
rdispls IN Integer array (of length group size). Entry i specifies the displacement (relative to recvbuf) at which to place the incoming data from process i.
recvtypes IN Array of datatypes (of length group size). Entry i specifies the type of data received from process i (handle).
comm IN Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Topo_test (C) MPI_TOPO_TEST (Fortran)

MPI_Topo_test determines the type of topology (if any) associated with a communicator.

#include <mpi.h>
int MPI_Topo_test(MPI_Comm comm, int *top_type)
use mpi
CALL MPI_TOPO_TEST(comm, top_type, ierr)
INTEGER     :: comm, top_type, ierr
comm IN Communicator (handle).
top_type OUT Topology type of communicator: MPI_CART, MPI_GRAPH, MPI_DIST_GRAPH, or MPI_UNDEFINED if no topology is attached (integer).
ierr OUT Return code (Fortran only).


4.6   MPI Environment Management

MPI_Abort/MPI_ABORT Aborts a process within a communicator
MPI_Add_error_class/MPI_ADD_ERROR_CLASS Creates a new error class and returns its value
MPI_Add_error_code/MPI_ADD_ERROR_CODE Creates a new error code associated with an error class and returns its value
MPI_Add_error_string/MPI_ADD_ERROR_STRING Creates a new error string associated with an error code or class
MPI_Alloc_mem/MPI_ALLOC_MEM Allocates memory
MPI_Comm_call_errhandler/MPI_COMM_CALL_ERRHANDLER Invokes the error handler associated with a communicator
MPI_Comm_create_errhandler/MPI_COMM_CREATE_ERRHANDLER Creates an error handler that can be attached to a communicator
MPI_Comm_get_errhandler/MPI_COMM_GET_ERRHANDLER Returns the error handler associated with a communicator
MPI_Comm_set_errhandler/MPI_COMM_SET_ERRHANDLER Attaches a new error handler to a communicator
MPI_Errhandler_free/MPI_ERRHANDLER_FREE Frees an error handler
MPI_Error_class/MPI_ERROR_CLASS Converts an error code to an error class
MPI_Error_string/MPI_ERROR_STRING Converts an error code to an error character string
MPI_File_call_errhandler/MPI_FILE_CALL_ERRHANDLER Invokes the error handler associated with a file with the error code supplied
MPI_File_create_errhandler/MPI_FILE_CREATE_ERRHANDLER Creates an error handler for a file
MPI_File_get_errhandler/MPI_FILE_GET_ERRHANDLER Returns the error handler associated with a file
MPI_File_set_errhandler/MPI_FILE_SET_ERRHANDLER Attaches a new error handler to a file
MPI_Finalize/MPI_FINALIZE Terminates the MPI execution environment
MPI_Finalized/MPI_FINALIZED Determines whether MPI_Finalize has completed
MPI_Free_mem/MPI_FREE_MEM Deallocates memory previously allocated by MPI_Alloc_mem
MPI_Get_library_version/MPI_GET_LIBRARY_VERSION Returns a string representing the version of the MPI library
MPI_Get_processor_name/MPI_GET_PROCESSOR_NAME Returns a processor name
MPI_Get_version/MPI_GET_VERSION Determines the MPI version number
MPI_Init/MPI_INIT Initializes the MPI environment
MPI_Initialized/MPI_INITIALIZED Inquiry about the MPI environment initialization status
MPI_Win_call_errhandler/MPI_WIN_CALL_ERRHANDLER Invokes the error handler associated with a window with the error code supplied
MPI_Win_create_errhandler/MPI_WIN_CREATE_ERRHANDLER Creates an error handler for a window
MPI_Win_get_errhandler/MPI_WIN_GET_ERRHANDLER Returns the error handler associated with a window
MPI_Win_set_errhandler/MPI_WIN_SET_ERRHANDLER Attaches a new error handler to a window
MPI_Wtick/MPI_WTICK Returns the resolution of MPI_WTIME in seconds
MPI_Wtime/MPI_WTIME Returns the elapsed time in seconds since an arbitrary time in the past



MPI_Abort (C) MPI_ABORT (Fortran)

MPI_Abort terminates the MPI execution environment.

#include <mpi.h>
int MPI_Abort(MPI_Comm comm, int errcode)
use mpi
CALL MPI_ABORT(comm, errcode, ierr)
INTEGER     :: comm, errcode, ierr
comm IN Communicator of tasks to abort (handle).
errcode IN Error code to return to the invoking environment (integer).
ierr OUT Return code (Fortran only).



MPI_Add_error_class (C) MPI_ADD_ERROR_CLASS (Fortran)

MPI_Add_error_class creates a new error class and returns its value.

#include <mpi.h>
int MPI_Add_error_class(int *errorclass)
use mpi
CALL MPI_ADD_ERROR_CLASS(errorclass, ierr)
INTEGER     :: errorclass, ierr
errorclass OUT Value for new error class (integer).
ierr OUT Return code (Fortran only).



MPI_Add_error_code (C) MPI_ADD_ERROR_CODE (Fortran)

MPI_Add_error_code creates a new error code associated with an error class and returns its value.

#include <mpi.h>
int MPI_Add_error_code(int errorclass, int *errorcode)
use mpi
CALL MPI_ADD_ERROR_CODE(errorclass, errorcode, ierr)
INTEGER     :: errorclass, errorcode, ierr
errorclass IN Error class (integer).
errorcode OUT New error code associated with error class (integer).
ierr OUT Return code (Fortran only).



MPI_Add_error_string (C) MPI_ADD_ERROR_STRING (Fortran)

MPI_Add_error_string creates a new error string associated with an error code or class.

#include <mpi.h>
int MPI_Add_error_string(int errcode, char *string)
use mpi
CALL MPI_ADD_ERROR_STRING(errcode, string, ierr)
CHARACTER*(*) :: string
INTEGER       :: errcode, ierr
errcode IN Error code or class (integer).
string IN Text corresponding to error code (string).
ierr OUT Return code (Fortran only).
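
The three MPI_Add_error_* procedures are typically used together. The sketch below registers an application-specific error class, one code within that class, and a message for the code; the function name and message text are illustrative only.

#include <mpi.h>

int register_app_error(void)
{
    int app_class, app_code;

    MPI_Add_error_class(&app_class);
    MPI_Add_error_code(app_class, &app_code);
    MPI_Add_error_string(app_code, "application: configuration file missing");
    return app_code;   /* may later be raised via MPI_Comm_call_errhandler */
}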



MPI_Alloc_mem (C) MPI_ALLOC_MEM (Fortran)

MPI_Alloc_mem allocates memory.

#include <mpi.h>
int MPI_Alloc_mem(MPI_Aint size, MPI_Info info, void *baseptr)
use mpi
CALL MPI_ALLOC_MEM(size, info, baseptr, ierr)
INTEGER                        :: info, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: size, baseptr
size IN Size of memory segment in bytes (nonnegative integer).
info IN Info argument (handle).
baseptr OUT Pointer to beginning of memory segment allocated.
ierr OUT Return code (Fortran only).
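
As a brief sketch, the call below requests 1024 bytes through MPI, which may return memory that is favorable for communication, and releases it with MPI_Free_mem (described later in this section). The function name is illustrative.

#include <mpi.h>

void alloc_example(void)
{
    void *buf;

    MPI_Alloc_mem((MPI_Aint)1024, MPI_INFO_NULL, &buf);
    /* ... use buf, e.g. as an RMA window or a message buffer ... */
    MPI_Free_mem(buf);
}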



MPI_Comm_call_errhandler (C) MPI_COMM_CALL_ERRHANDLER (Fortran)

MPI_Comm_call_errhandler invokes the error handler associated with a communicator.

#include <mpi.h>
int MPI_Comm_call_errhandler(MPI_Comm comm, int errorcode)
use mpi
CALL MPI_COMM_CALL_ERRHANDLER(comm, errorcode, ierr)
INTEGER     :: comm, errorcode, ierr
comm IN Communicator with error handler (handle).
errorcode IN Error code (integer).
ierr OUT Return code (Fortran only).



MPI_Comm_create_errhandler (C) MPI_COMM_CREATE_ERRHANDLER (Fortran)

MPI_Comm_create_errhandler creates an error handler that can be attached to a communicator.

#include <mpi.h>
int MPI_Comm_create_errhandler(MPI_Comm_errhandler_fn *function,
                               MPI_Errhandler *errhandler)
use mpi
CALL MPI_COMM_CREATE_ERRHANDLER(function, errhandler, ierr)
EXTERNAL    :: function
INTEGER     :: errhandler, ierr
function IN User-defined error handling procedure (function).
errhandler OUT MPI error handler (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_get_errhandler (C) MPI_COMM_GET_ERRHANDLER (Fortran)

MPI_Comm_get_errhandler returns the error handler associated with a communicator.

#include <mpi.h>
int MPI_Comm_get_errhandler(MPI_Comm comm,
                            MPI_Errhandler *errhandler)
use mpi
CALL MPI_COMM_GET_ERRHANDLER(comm, errhandler, ierr)
INTEGER     :: comm, errhandler, ierr
comm IN Communicator (handle).
errhandler OUT Error handler associated with communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_set_errhandler (C) MPI_COMM_SET_ERRHANDLER (Fortran)

MPI_Comm_set_errhandler attaches a new error handler to a communicator.

#include <mpi.h>
int MPI_Comm_set_errhandler(MPI_Comm comm,
                            MPI_Errhandler errhandler)
use mpi
CALL MPI_COMM_SET_ERRHANDLER(comm, errhandler, ierr)
INTEGER     :: comm, errhandler, ierr
comm INOUT Communicator (handle).
errhandler IN New error handler for communicator (handle).
ierr OUT Return code (Fortran only).
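
A communicator error handler is created once and then attached; the handle may be freed after attachment because the communicator keeps its own reference. The sketch below installs a handler that merely prints the error message; the handler and function names are illustrative.

#include <mpi.h>
#include <stdio.h>

/* Matches MPI_Comm_errhandler_fn: communicator, error code, then
   implementation-defined varargs. */
static void warn_handler(MPI_Comm *comm, int *errcode, ...)
{
    char msg[MPI_MAX_ERROR_STRING];
    int len;

    MPI_Error_string(*errcode, msg, &len);
    fprintf(stderr, "MPI error intercepted: %s\n", msg);
}

void install_handler(void)
{
    MPI_Errhandler eh;

    MPI_Comm_create_errhandler(warn_handler, &eh);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, eh);
    MPI_Errhandler_free(&eh);   /* communicator keeps its reference */
}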



MPI_Errhandler_free (C) MPI_ERRHANDLER_FREE (Fortran)

MPI_Errhandler_free frees an MPI error handler.

#include <mpi.h>
int MPI_Errhandler_free(MPI_Errhandler *errhandler)
use mpi
CALL MPI_ERRHANDLER_FREE(errhandler, ierr)
INTEGER     :: errhandler, ierr
errhandler INOUT MPI error handler; set to MPI_ERRHANDLER_NULL on exit (handle).
ierr OUT Return code (Fortran only).



MPI_Error_class (C) MPI_ERROR_CLASS (Fortran)

MPI_Error_class converts an error code into an error class.

#include <mpi.h>
int MPI_Error_class(int errorcode, int *errorclass)
use mpi
CALL MPI_ERROR_CLASS(errorcode, errorclass, ierr)
INTEGER     :: errorcode, errorclass, ierr
errorcode IN Error code returned by an MPI routine.
errorclass OUT Error class associated with error code.
ierr OUT Return code (Fortran only).



MPI_Error_string (C) MPI_ERROR_STRING (Fortran)

MPI_Error_string returns the string for an error code.

#include <mpi.h>
int MPI_Error_string(int errorcode, char *string, int *resultlen)
use mpi
CALL MPI_ERROR_STRING(errorcode, string, resultlen, ierr)
CHARACTER*(*) :: string
INTEGER       :: errorcode, resultlen, ierr
errorcode IN Error code returned by an MPI routine or error class (integer).
string OUT Text corresponding to error code (string).
resultlen OUT Length of text corresponding to error code (integer).
ierr OUT Return code (Fortran only).
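
Together with the MPI_ERRORS_RETURN error handler, these two procedures let a caller decode a failure instead of aborting. A small sketch (the function name is illustrative):

#include <mpi.h>
#include <stdio.h>

void report(int rc)
{
    if (rc != MPI_SUCCESS) {
        int cls, len;
        char msg[MPI_MAX_ERROR_STRING];

        MPI_Error_class(rc, &cls);
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "error class %d: %s\n", cls, msg);
    }
}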



MPI_File_call_errhandler (C) MPI_FILE_CALL_ERRHANDLER (Fortran)

MPI_File_call_errhandler invokes the error handler associated with a file with the error code supplied.

#include <mpi.h>
int MPI_File_call_errhandler(MPI_File fh, int errorcode)
use mpi
CALL MPI_FILE_CALL_ERRHANDLER(fh, errorcode, ierr)
INTEGER     :: fh, errorcode, ierr
fh IN File with error handler (handle).
errorcode IN Error code (integer).
ierr OUT Return code (Fortran only).



MPI_File_create_errhandler (C) MPI_FILE_CREATE_ERRHANDLER (Fortran)

MPI_File_create_errhandler creates an error handler for a file.

#include <mpi.h>
int MPI_File_create_errhandler(MPI_File_errhandler_fn *function,
                               MPI_Errhandler *errhandler)
use mpi
CALL MPI_FILE_CREATE_ERRHANDLER(function, errhandler, ierr)
EXTERNAL    :: function
INTEGER     :: errhandler, ierr
function IN User-defined error handling procedure (function).
errhandler OUT MPI error handler (handle).
ierr OUT Return code (Fortran only).



MPI_File_get_errhandler (C) MPI_FILE_GET_ERRHANDLER (Fortran)

MPI_File_get_errhandler returns the error handler associated with a file.

#include <mpi.h>
int MPI_File_get_errhandler(MPI_File file,
                            MPI_Errhandler *errhandler)
use mpi
CALL MPI_FILE_GET_ERRHANDLER(file, errhandler, ierr)
INTEGER     :: file, errhandler, ierr
file IN File (handle).
errhandler OUT Error handler associated with file (handle).
ierr OUT Return code (Fortran only).



MPI_File_set_errhandler (C) MPI_FILE_SET_ERRHANDLER (Fortran)

MPI_File_set_errhandler attaches a new error handler to a file.

#include <mpi.h>
int MPI_File_set_errhandler(MPI_File file,
                            MPI_Errhandler errhandler)
use mpi
CALL MPI_FILE_SET_ERRHANDLER(file, errhandler, ierr)
INTEGER     :: file, errhandler, ierr
file INOUT File (handle).
errhandler IN New error handler for file (handle).
ierr OUT Return code (Fortran only).



MPI_Finalize (C) MPI_FINALIZE (Fortran)

MPI_Finalize terminates the MPI execution environment.

#include <mpi.h>
int MPI_Finalize(void)
use mpi
CALL MPI_FINALIZE(ierr)
INTEGER     :: ierr
ierr OUT Return code (Fortran only).



MPI_Finalized (C) MPI_FINALIZED (Fortran)

MPI_Finalized determines whether MPI_Finalize has completed.

#include <mpi.h>
int MPI_Finalized(int *flag)
use mpi
CALL MPI_FINALIZED(flag, ierr)
LOGICAL     :: flag
INTEGER     :: ierr
flag OUT True if MPI_FINALIZE has completed, otherwise false (logical).
ierr OUT Return code (Fortran only).



MPI_Free_mem (C) MPI_FREE_MEM (Fortran)

MPI_Free_mem deallocates memory previously allocated by MPI_Alloc_mem.

#include <mpi.h>
int MPI_Free_mem(void *base)
use mpi
CALL MPI_FREE_MEM(base, ierr)
<arbitrary> :: base
INTEGER     :: ierr
base IN Initial address of memory segment allocated by MPI_ALLOC_MEM (choice).
ierr OUT Return code (Fortran only).



MPI_Get_library_version (C) MPI_GET_LIBRARY_VERSION (Fortran)

MPI_Get_library_version returns a string representing the version of the MPI library.

#include <mpi.h>
int MPI_Get_library_version(char *version, int *resultlen)
use mpi
CALL MPI_GET_LIBRARY_VERSION(version, resultlen, ierr)
CHARACTER*(*) :: version
INTEGER       :: resultlen, ierr
version OUT Version string (string).
resultlen OUT Length (in printable characters) of the result returned in version (integer).
ierr OUT Return code (Fortran only).



MPI_Get_processor_name (C) MPI_GET_PROCESSOR_NAME (Fortran)

MPI_Get_processor_name returns the name of the processor.

#include <mpi.h>
int MPI_Get_processor_name(char *name, int *resultlen)
use mpi
CALL MPI_GET_PROCESSOR_NAME(name, resultlen, ierr)
CHARACTER*(*) :: name
INTEGER       :: resultlen, ierr
name OUT Unique name for the actual (not virtual) node (string).
resultlen OUT Length of the name (integer).
ierr OUT Return code (Fortran only).



MPI_Get_version (C) MPI_GET_VERSION (Fortran)

MPI_Get_version returns the version of the MPI standard supported by the MPI library.

#include <mpi.h>
int MPI_Get_version(int *version, int *subversion)
use mpi
CALL MPI_GET_VERSION(version, subversion, ierr)
INTEGER     :: version, subversion, ierr
version OUT Version number (integer).
subversion OUT Subversion number (integer).
ierr OUT Return code (Fortran only).
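
Both version inquiries may be called at any time, even before MPI_Init. A short sketch that prints the supported standard level and the library identification string (the function name is illustrative):

#include <mpi.h>
#include <stdio.h>

void print_versions(void)
{
    int ver, subver, len;
    char lib[MPI_MAX_LIBRARY_VERSION_STRING];

    MPI_Get_version(&ver, &subver);
    MPI_Get_library_version(lib, &len);
    printf("MPI standard %d.%d\nlibrary: %s\n", ver, subver, lib);
}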



MPI_Init (C) MPI_INIT (Fortran)

MPI_Init initializes the MPI execution environment.

#include <mpi.h>
int MPI_Init(int *argc, char ***argv)
use mpi
CALL MPI_INIT(ierr)
INTEGER     :: ierr
argc IN Pointer to the number of arguments.
argv IN Pointer to the argument vector.
ierr OUT Return code (Fortran only).



MPI_Initialized (C) MPI_INITIALIZED (Fortran)

MPI_Initialized determines whether MPI_Init has been called.

#include <mpi.h>
int MPI_Initialized(int *flag)
use mpi
CALL MPI_INITIALIZED(flag, ierr)
LOGICAL     :: flag
INTEGER     :: ierr
flag OUT True if MPI_INIT has been called, otherwise false (logical).
ierr OUT Return code (Fortran only).
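
The procedures MPI_Init, MPI_Initialized, MPI_Finalize, and MPI_Finalized frame every MPI program. A minimal skeleton, with the guard that library code typically uses to avoid initializing twice:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int initialized, rank;

    MPI_Initialized(&initialized);   /* legal even before MPI_Init */
    if (!initialized)
        MPI_Init(&argc, &argv);

    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("rank %d up\n", rank);

    MPI_Finalize();
    return 0;
}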



MPI_Win_call_errhandler (C) MPI_WIN_CALL_ERRHANDLER (Fortran)

MPI_Win_call_errhandler invokes the error handler associated with a window with the error code supplied.

#include <mpi.h>
int MPI_Win_call_errhandler(MPI_Win win, int errorcode)
use mpi
CALL MPI_WIN_CALL_ERRHANDLER(win, errorcode, ierr)
INTEGER     :: win, errorcode, ierr
win IN Window with error handler (handle).
errorcode IN Error code (integer).
ierr OUT Return code (Fortran only).



MPI_Win_create_errhandler (C) MPI_WIN_CREATE_ERRHANDLER (Fortran)

MPI_Win_create_errhandler creates an error handler for a window.

#include <mpi.h>
int MPI_Win_create_errhandler(MPI_Win_errhandler_fn *function,
                              MPI_Errhandler *errhandler)
use mpi
CALL MPI_WIN_CREATE_ERRHANDLER(function, errhandler, ierr)
EXTERNAL    :: function
INTEGER     :: errhandler, ierr
function IN User-defined error handling procedure (function).
errhandler OUT MPI error handler (handle).
ierr OUT Return code (Fortran only).



MPI_Win_get_errhandler (C) MPI_WIN_GET_ERRHANDLER (Fortran)

MPI_Win_get_errhandler returns the error handler associated with a window.

#include <mpi.h>
int MPI_Win_get_errhandler(MPI_Win win,
                           MPI_Errhandler *errhandler)
use mpi
CALL MPI_WIN_GET_ERRHANDLER(win, errhandler, ierr)
INTEGER     :: win, errhandler, ierr
win IN Window object (handle).
errhandler OUT Error handler associated with window (handle).
ierr OUT Return code (Fortran only).



MPI_Win_set_errhandler (C) MPI_WIN_SET_ERRHANDLER (Fortran)

MPI_Win_set_errhandler attaches a new error handler to a window.

#include <mpi.h>
int MPI_Win_set_errhandler(MPI_Win win,
                           MPI_Errhandler errhandler)
use mpi
CALL MPI_WIN_SET_ERRHANDLER(win, errhandler, ierr)
INTEGER     :: win, errhandler, ierr
win INOUT Window object (handle).
errhandler IN New error handler for window (handle).
ierr OUT Return code (Fortran only).



MPI_Wtick (C) MPI_WTICK (Fortran)

MPI_Wtick returns the resolution of MPI_Wtime.

#include <mpi.h>
double MPI_Wtick(void)
use mpi
DOUBLE PRECISION MPI_WTICK()
Return value
Time in seconds of resolution of MPI_Wtime.



MPI_Wtime (C) MPI_WTIME (Fortran)

MPI_Wtime returns an elapsed time on the calling processor.

#include <mpi.h>
double MPI_Wtime(void)
use mpi
DOUBLE PRECISION MPI_WTIME()
Return value
Time in seconds since an arbitrary time in the past.
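
A common pattern is to bracket a code region with two MPI_Wtime calls; MPI_Wtick indicates how much of the measured difference is trustworthy. A sketch (do_work is a placeholder):

#include <mpi.h>
#include <stdio.h>

void time_region(void)
{
    double t0 = MPI_Wtime();
    /* do_work(); */
    double t1 = MPI_Wtime();

    printf("elapsed %.6f s (clock tick %.3e s)\n", t1 - t0, MPI_Wtick());
}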


4.7   The Info Object

MPI_Info_create/MPI_INFO_CREATE Creates an info object
MPI_Info_delete/MPI_INFO_DELETE Deletes a (key,value) pair from an info object
MPI_Info_dup/MPI_INFO_DUP Duplicates an info object, including the (key,value) pairs in their original order
MPI_Info_free/MPI_INFO_FREE Frees an info object and sets it to MPI_INFO_NULL
MPI_Info_get/MPI_INFO_GET Returns the value associated with a key, as set by a previous call to MPI_INFO_SET
MPI_Info_get_nkeys/MPI_INFO_GET_NKEYS Returns the number of defined keys in an info object
MPI_Info_get_nthkey/MPI_INFO_GET_NTHKEY Returns the nth defined key in an info object
MPI_Info_get_valuelen/MPI_INFO_GET_VALUELEN Returns the length of a value associated with a key
MPI_Info_set/MPI_INFO_SET Sets a (key,value) pair for an info object, overwriting any previously set value



MPI_Info_create (C) MPI_INFO_CREATE (Fortran)

MPI_Info_create creates an info object.

#include <mpi.h>
int MPI_Info_create(MPI_Info *info)
use mpi
CALL MPI_INFO_CREATE(info, ierr)
INTEGER     :: info, ierr
info OUT New info object (handle).
ierr OUT Return code (Fortran only).



MPI_Info_delete (C) MPI_INFO_DELETE (Fortran)

MPI_Info_delete deletes a (key,value) pair from an info object.

#include <mpi.h>
int MPI_Info_delete(MPI_Info info, char *key)
use mpi
CALL MPI_INFO_DELETE(info, key, ierr)
CHARACTER*(*) :: key
INTEGER       :: info, ierr
info INOUT Info object (handle).
key IN Key to be deleted (string).
ierr OUT Return code (Fortran only).



MPI_Info_dup (C) MPI_INFO_DUP (Fortran)

MPI_Info_dup duplicates an info object, including the (key,value) pairs in their original order.

#include <mpi.h>
int MPI_Info_dup(MPI_Info info, MPI_Info *newinfo)
use mpi
CALL MPI_INFO_DUP(info, newinfo, ierr)
INTEGER     :: info, newinfo, ierr
info IN Info object (handle).
newinfo OUT Duplicated info object (handle).
ierr OUT Return code (Fortran only).



MPI_Info_free (C) MPI_INFO_FREE (Fortran)

MPI_Info_free frees an info object and sets it to MPI_INFO_NULL.

#include <mpi.h>
int MPI_Info_free(MPI_Info *info)
use mpi
CALL MPI_INFO_FREE(info, ierr)
INTEGER     :: info, ierr
info INOUT Info object (handle).
ierr OUT Return code (Fortran only).



MPI_Info_get (C) MPI_INFO_GET (Fortran)

MPI_Info_get returns the value associated with a key, as set by a previous call to MPI_INFO_SET.

#include <mpi.h>
int MPI_Info_get(MPI_Info info, char *key, int valuelen,
                 char *value, int *flag)
use mpi
CALL MPI_INFO_GET(info, key, valuelen, value, flag, ierr)
CHARACTER*(*) :: key, value
INTEGER       :: info, valuelen, ierr
LOGICAL       :: flag
info IN Info object (handle).
key IN Key (string).
valuelen IN Length of value (integer).
value OUT Value (string).
flag OUT True if key is defined, false if not (logical).
ierr OUT Return code (Fortran only).



MPI_Info_get_nkeys (C) MPI_INFO_GET_NKEYS (Fortran)

MPI_Info_get_nkeys returns the number of defined keys in an info object.

#include <mpi.h>
int MPI_Info_get_nkeys(MPI_Info info, int *nkeys)
use mpi
CALL MPI_INFO_GET_NKEYS(info, nkeys, ierr)
INTEGER     :: info, nkeys, ierr
info IN Info object (handle).
nkeys OUT Number of defined keys (integer).
ierr OUT Return code (Fortran only).



MPI_Info_get_nthkey (C) MPI_INFO_GET_NTHKEY (Fortran)

MPI_Info_get_nthkey returns the nth defined key in an info object.

#include <mpi.h>
int MPI_Info_get_nthkey(MPI_Info info, int n, char *key)
use mpi
CALL MPI_INFO_GET_NTHKEY(info, n, key, ierr)
CHARACTER*(*) :: key
INTEGER       :: info, n, ierr
info IN Info object (handle).
n IN Key number (integer).
key OUT Key (string).
ierr OUT Return code (Fortran only).



MPI_Info_get_valuelen (C) MPI_INFO_GET_VALUELEN (Fortran)

MPI_Info_get_valuelen returns the length of a value associated with a key.

#include <mpi.h>
int MPI_Info_get_valuelen(MPI_Info info, char *key,
                          int *valuelen, int *flag)
use mpi
CALL MPI_INFO_GET_VALUELEN(info, key, valuelen, flag, ierr)
CHARACTER*(*) :: key
INTEGER       :: info, valuelen, ierr
LOGICAL       :: flag
info IN Info object (handle).
key IN Key (string).
valuelen OUT Length of value (integer).
flag OUT True if key is defined, false if not (logical).
ierr OUT Return code (Fortran only).



MPI_Info_set (C) MPI_INFO_SET (Fortran)

MPI_Info_set sets a (key,value) pair for an info object, overwriting any previously set value.

#include <mpi.h>
int MPI_Info_set(MPI_Info info, char *key, char *value)
use mpi
CALL MPI_INFO_SET(info, key, value, ierr)
CHARACTER*(*) :: key, value
INTEGER       :: info, ierr
info INOUT Info object (handle).
key IN Key (string).
value IN Value (string).
ierr OUT Return code (Fortran only).
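
A typical life cycle of an info object is: create it, store hints, read them back, and free it. The sketch below uses the standard RMA hint key "no_locks"; any key/value pair is handled the same way, and the function name is illustrative.

#include <mpi.h>
#include <stdio.h>

void info_example(void)
{
    MPI_Info info;
    char value[MPI_MAX_INFO_VAL];
    int flag;

    MPI_Info_create(&info);
    MPI_Info_set(info, "no_locks", "true");
    MPI_Info_get(info, "no_locks", MPI_MAX_INFO_VAL - 1, value, &flag);
    if (flag)
        printf("no_locks = %s\n", value);
    MPI_Info_free(&info);   /* info is set to MPI_INFO_NULL */
}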


4.8   Process Creation and Management

MPI_Close_port/MPI_CLOSE_PORT Releases the network address of a port
MPI_Comm_accept/MPI_COMM_ACCEPT Establishes communication with a client
MPI_Comm_connect/MPI_COMM_CONNECT Establishes communication with a server
MPI_Comm_disconnect/MPI_COMM_DISCONNECT Waits for all pending communication on comm to complete internally, deallocates the communicator object, and sets the handle to MPI_COMM_NULL
MPI_Comm_get_parent/MPI_COMM_GET_PARENT Returns the parent inter-communicator of the current process
MPI_Comm_join/MPI_COMM_JOIN Is used for MPI implementations in an environment that supports the Berkeley Socket Interface
MPI_Comm_spawn/MPI_COMM_SPAWN Starts MPI processes dynamically
MPI_Comm_spawn_multiple/MPI_COMM_SPAWN_MULTIPLE Starts MPI processes dynamically
MPI_Lookup_name/MPI_LOOKUP_NAME Returns a port name published by MPI_Publish_name
MPI_Open_port/MPI_OPEN_PORT Opens a port on the server to accept connections from clients. The port name is supplied by the system using information from the info argument
MPI_Publish_name/MPI_PUBLISH_NAME Publishes a port name and its associated service name
MPI_Unpublish_name/MPI_UNPUBLISH_NAME Unpublishes a port name and its associated service name



MPI_Close_port (C) MPI_CLOSE_PORT (Fortran)

MPI_Close_port releases the network address of a port.

#include <mpi.h>
int MPI_Close_port(char *port_name)
use mpi
CALL MPI_CLOSE_PORT(port_name, ierr)
CHARACTER*(*) :: port_name
INTEGER       :: ierr
port_name IN A port (string).
ierr OUT Return code (Fortran only).



MPI_Comm_accept (C) MPI_COMM_ACCEPT (Fortran)

MPI_Comm_accept establishes communication with a client.

#include <mpi.h>
int MPI_Comm_accept(char *port_name, MPI_Info info, int root,
                    MPI_Comm comm, MPI_Comm *newcomm)
use mpi
CALL MPI_COMM_ACCEPT(port_name, info, root, comm, newcomm, ierr)
CHARACTER*(*) :: port_name
INTEGER       :: info, root, comm, newcomm, ierr
port_name IN A port (string, used only on root).
info IN Implementation-dependent information (handle, used only on root).
root IN Rank in comm of root node (integer).
comm IN Intra-Communicator over which call is collective (handle).
newcomm OUT Inter-Communicator with client as remote group (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_connect (C) MPI_COMM_CONNECT (Fortran)

MPI_Comm_connect establishes communication with a server.

#include <mpi.h>
int MPI_Comm_connect(char *port_name, MPI_Info info, int root,
                     MPI_Comm comm, MPI_Comm *newcomm)
use mpi
CALL MPI_COMM_CONNECT(port_name, info, root, comm, newcomm, ierr)
CHARACTER*(*) :: port_name
INTEGER       :: info, root, comm, newcomm, ierr
port_name IN A port (string, used only on root).
info IN Implementation-dependent information (handle, used only on root).
root IN Rank in comm of root node (integer).
comm IN Intra-Communicator over which call is collective (handle).
newcomm OUT Inter-Communicator with server as remote group (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_disconnect (C) MPI_COMM_DISCONNECT (Fortran)

MPI_Comm_disconnect waits for all pending communication on comm to complete internally, deallocates the communicator object, and sets the handle to MPI_COMM_NULL.

#include <mpi.h>
int MPI_Comm_disconnect(MPI_Comm *comm)
use mpi
CALL MPI_COMM_DISCONNECT(comm, ierr)
INTEGER     :: comm, ierr
comm INOUT Communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_get_parent (C) MPI_COMM_GET_PARENT (Fortran)

MPI_Comm_get_parent returns the parent inter-communicator of the current process.

#include <mpi.h>
int MPI_Comm_get_parent(MPI_Comm *parent)
use mpi
CALL MPI_COMM_GET_PARENT(parent, ierr)
INTEGER     :: parent, ierr
parent OUT Parent communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_join (C) MPI_COMM_JOIN (Fortran)

MPI_Comm_join is used for MPI implementations in an environment that supports the Berkeley Socket Interface.

#include <mpi.h>
int MPI_Comm_join(int fd, MPI_Comm *intercomm)
use mpi
CALL MPI_COMM_JOIN(fd, intercomm, ierr)
INTEGER     :: fd, intercomm, ierr
fd IN Socket file descriptor.
intercomm OUT New inter-communicator (handle).
ierr OUT Return code (Fortran only).



MPI_Comm_spawn (C) MPI_COMM_SPAWN (Fortran)

MPI_Comm_spawn starts MPI processes dynamically.

#include <mpi.h>
int MPI_Comm_spawn(char *command, char **argv, int maxprocs,
                   MPI_Info info, int root, MPI_Comm comm,
                   MPI_Comm *intercomm, int *array_of_errcodes)
use mpi
CALL MPI_COMM_SPAWN(command, argv, maxprocs, info, root, comm,
                    intercomm, array_of_errcodes, ierr)
CHARACTER*(*) :: command, argv(*)
INTEGER       :: maxprocs, info, root, comm,
                 intercomm, array_of_errcodes(*), ierr
command IN Name of program to be spawned (string, significant only at root).
argv IN Arguments for commands (array of strings, significant only at root).
maxprocs IN Maximum number of processes to start (integer, significant only at root).
info IN A set of key value pairs telling the runtime system where and how to start the processes (handle, significant only at root).
root IN Rank of process in which previous arguments are examined (integer).
comm IN Intra-Communicator containing group of spawning processes (handle).
intercomm OUT Inter-Communicator between original group and newly spawned group (handle).
array_of_errcodes OUT One error code per process (array of integer).
ierr OUT Return code (Fortran only).
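
A parent-side sketch: spawn four copies of a worker binary (the program name "./worker" is hypothetical) and obtain an inter-communicator for talking to them.

#include <mpi.h>

void spawn_workers(MPI_Comm *workers)
{
    int errcodes[4];   /* one entry per spawned process */

    MPI_Comm_spawn("./worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0 /* root */, MPI_COMM_WORLD, workers, errcodes);
}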



MPI_Comm_spawn_multiple (C) MPI_COMM_SPAWN_MULTIPLE (Fortran)

MPI_Comm_spawn_multiple starts MPI processes dynamically.

#include <mpi.h>
int MPI_Comm_spawn_multiple(int count,
                            char **array_of_commands,
                            char ***array_of_argv,
                            int *array_of_maxprocs,
                            MPI_Info *array_of_info,
                            int root,
                            MPI_Comm comm,
                            MPI_Comm *intercomm,
                            int *array_of_errcodes)
use mpi
CALL MPI_COMM_SPAWN_MULTIPLE(count, array_of_commands,
                             array_of_argv,
                             array_of_maxprocs,
                             array_of_info,
                             root, comm, intercomm,
                             array_of_errcodes, ierr)
CHARACTER*(*) :: array_of_commands(*), array_of_argv(*)
INTEGER       :: count, array_of_maxprocs(*),
                 array_of_info(*), root, comm, intercomm,
                 array_of_errcodes(*), ierr
count IN Number of commands (positive integer, significant only at root).
array_of_commands IN Programs to be executed (array of strings, significant only at root).
array_of_argv IN Arguments for commands (array of array of strings, significant only at root).
array_of_maxprocs IN Maximum number of processes to start for each command (array of integer, significant only at root).
array_of_info IN Info objects telling the runtime system where and how to start the processes (array of handles, significant only at root).
root IN Rank of process in which previous arguments are examined (integer).
comm IN Intra-Communicator containing group of spawning processes (handle).
intercomm OUT Inter-Communicator between original group and newly spawned group (handle).
array_of_errcodes OUT One error code per process (array of integer).
ierr OUT Return code (Fortran only).



MPI_Lookup_name (C) MPI_LOOKUP_NAME (Fortran)

MPI_Lookup_name returns a port name published by MPI_Publish_name.

#include <mpi.h>
int MPI_Lookup_name(char *service_name, MPI_Info info,
                    char *port_name)
use mpi
CALL MPI_LOOKUP_NAME(service_name, info, port_name, ierr)
CHARACTER*(*) :: service_name, port_name
INTEGER       :: info, ierr
service_name IN Service name (string).
info IN Implementation-specific information (handle).
port_name OUT Port name (string).
ierr OUT Return code (Fortran only).



MPI_Open_port (C) MPI_OPEN_PORT (Fortran)

MPI_Open_port opens a port on the server to accept connections from clients. The port name is supplied by the system using information from the info argument.

#include <mpi.h>
int MPI_Open_port(MPI_Info info, char *port_name)
use mpi
CALL MPI_OPEN_PORT(info, port_name, ierr)
CHARACTER*(*) :: port_name
INTEGER       :: info, ierr
info IN Implementation-specific information about establishing a network address (handle).
port_name OUT New port (string).
ierr OUT Return code (Fortran only).



MPI_Publish_name (C) MPI_PUBLISH_NAME (Fortran)

MPI_Publish_name publishes a port name and its associated service name.

#include <mpi.h>
int MPI_Publish_name(char *service_name, MPI_Info info,
                     char *port_name)
use mpi
CALL MPI_PUBLISH_NAME(service_name, info, port_name, ierr)
CHARACTER*(*) :: service_name, port_name
INTEGER       :: info, ierr
service_name IN Service name associated with port (string).
info IN Implementation-specific information (handle).
port_name IN Port name (string).
ierr OUT Return code (Fortran only).
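
On the server side, MPI_Open_port, MPI_Publish_name, and MPI_Comm_accept are normally used in sequence; a client pairs MPI_Lookup_name with MPI_Comm_connect. A server-side sketch (the service name "my_service" and the function name are illustrative):

#include <mpi.h>

void serve_one_client(MPI_Comm *client)
{
    char port[MPI_MAX_PORT_NAME];

    MPI_Open_port(MPI_INFO_NULL, port);
    MPI_Publish_name("my_service", MPI_INFO_NULL, port);
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, client);
    MPI_Unpublish_name("my_service", MPI_INFO_NULL, port);
    MPI_Close_port(port);
}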



MPI_Unpublish_name (C) MPI_UNPUBLISH_NAME (Fortran)

MPI_Unpublish_name unpublishes a port name and its associated service name.

#include <mpi.h>
int MPI_Unpublish_name(char *service_name, MPI_Info info,
                       char *port_name)
use mpi
CALL MPI_UNPUBLISH_NAME(service_name, info, port_name, ierr)
CHARACTER*(*) :: service_name, port_name
INTEGER       :: info, ierr
service_name IN Service name associated with port (string).
info IN Implementation-specific information (handle).
port_name IN Port name (string).
ierr OUT Return code (Fortran only).


4.9   One-Sided Communication

MPI_Accumulate/MPI_ACCUMULATE Accumulates the contents of the origin buffer to the target buffer
MPI_Compare_and_swap/MPI_COMPARE_AND_SWAP Performs a remote atomic compare and swap operation
MPI_Fetch_and_op/MPI_FETCH_AND_OP Performs an atomic read-modify-write and returns the data before the accumulate operation
MPI_Get/MPI_GET Transfers the contents of the target buffer to the origin buffer
MPI_Get_accumulate/MPI_GET_ACCUMULATE Performs an atomic, one-sided read-and-accumulate operation
MPI_Put/MPI_PUT Transfers the contents of the origin buffer to the target buffer
MPI_Raccumulate/MPI_RACCUMULATE Accumulates data into the target process using remote memory access and returns a request handle for the operation
MPI_Rget/MPI_RGET Gets data from a memory window on a remote process and returns a request handle for the operation
MPI_Rget_accumulate/MPI_RGET_ACCUMULATE Performs an atomic, one-sided read-and-accumulate operation and returns a request handle for the operation
MPI_Rput/MPI_RPUT Puts data into a memory window on a remote process and returns a request handle for the operation
MPI_Win_allocate/MPI_WIN_ALLOCATE Creates and allocates an MPI window object for one-sided communication
MPI_Win_allocate_shared/MPI_WIN_ALLOCATE_SHARED Creates an MPI Window object for one-sided communication and shared memory access
MPI_Win_attach/MPI_WIN_ATTACH Attaches memory to a dynamic window
MPI_Win_complete/MPI_WIN_COMPLETE Completes an RMA access epoch
MPI_Win_create/MPI_WIN_CREATE Creates a window object for RMA (one-sided communication) operations
MPI_Win_create_dynamic/MPI_WIN_CREATE_DYNAMIC Creates an MPI Window object for one-sided communication, which allows memory to be dynamically exposed and un-exposed for RMA operations
MPI_Win_detach/MPI_WIN_DETACH Detaches memory from a dynamic window
MPI_Win_fence/MPI_WIN_FENCE Performs fence synchronization
MPI_Win_flush/MPI_WIN_FLUSH Completes all outstanding RMA operations at the given target
MPI_Win_flush_all/MPI_WIN_FLUSH_ALL Completes all outstanding RMA operations at all targets
MPI_Win_flush_local/MPI_WIN_FLUSH_LOCAL Completes locally all outstanding RMA operations at the given target
MPI_Win_flush_local_all/MPI_WIN_FLUSH_LOCAL_ALL Completes locally all outstanding RMA operations at all targets
MPI_Win_free/MPI_WIN_FREE Frees a window object
MPI_Win_get_group/MPI_WIN_GET_GROUP Returns a copy of the group of the communicator used to create a window
MPI_Win_get_info/MPI_WIN_GET_INFO Returns a new info object containing the hints of the window
MPI_Win_lock/MPI_WIN_LOCK Starts an RMA access epoch and locks a window
MPI_Win_lock_all/MPI_WIN_LOCK_ALL Begins an RMA access epoch at all processes on the given window
MPI_Win_post/MPI_WIN_POST Starts an RMA exposure epoch
MPI_Win_set_info/MPI_WIN_SET_INFO Sets new values for the hints of the window
MPI_Win_shared_query/MPI_WIN_SHARED_QUERY Queries the size and base pointer for remote memory segments
MPI_Win_start/MPI_WIN_START Starts an RMA access epoch
MPI_Win_sync/MPI_WIN_SYNC Synchronizes public and private copies of the given window
MPI_Win_test/MPI_WIN_TEST Determines whether RMA operations are complete
MPI_Win_unlock/MPI_WIN_UNLOCK Completes an RMA access epoch and unlocks a window
MPI_Win_unlock_all/MPI_WIN_UNLOCK_ALL Completes an RMA access epoch at all processes on the given window
MPI_Win_wait/MPI_WIN_WAIT Completes an RMA access epoch



MPI_Accumulate (C) MPI_ACCUMULATE (Fortran)

MPI_Accumulate accumulates the contents of the origin buffer to the target buffer.

#include <mpi.h>
int MPI_Accumulate(void *origin_addr, int origin_count,
                   MPI_Datatype origin_datatype,
                   int target_rank,
                   MPI_Aint target_disp,
                   int target_count,
                   MPI_Datatype target_datatype,
                   MPI_Op op, MPI_Win win)
use mpi
CALL MPI_ACCUMULATE(origin_addr, origin_count, origin_datatype,
                    target_rank, target_disp, target_count,
                    target_datatype, op, win, ierr)
<arbitrary>                    :: origin_addr(*)
INTEGER                        :: origin_count, origin_datatype,
                                  target_rank, target_count,
                                  target_datatype, op, win, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: target_disp
origin_addr IN Initial address of origin buffer (choice).
origin_count IN Number of entries in origin buffer (nonnegative integer).
origin_datatype IN Datatype of each entry in origin buffer (handle).
target_rank IN Rank of target (nonnegative integer).
target_disp IN Displacement from start of window to beginning of target buffer (nonnegative integer).
target_count IN Number of entries in target buffer (nonnegative integer).
target_datatype IN Datatype of each entry in target buffer (handle).
op IN Reduce operation (handle).
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Compare_and_swap (C) MPI_COMPARE_AND_SWAP (Fortran)

MPI_Compare_and_swap performs a remote atomic compare and swap operation.

#include <mpi.h>
int MPI_Compare_and_swap(const void *origin_addr, const void *compare_addr,
                         void *result_addr, MPI_Datatype datatype, int target_rank,
                         MPI_Aint target_disp, MPI_Win win)
use mpi
CALL MPI_COMPARE_AND_SWAP(origin_addr, compare_addr, result_addr, datatype,
                          target_rank, target_disp, win, ierr)
<arbitrary>                    :: origin_addr(*), compare_addr(*), result_addr(*)
INTEGER(KIND=MPI_ADDRESS_KIND) :: target_disp
INTEGER                        :: datatype, target_rank, win, ierr
origin_addr IN Initial address of buffer (choice).
compare_addr IN Initial address of compare buffer (choice).
result_addr OUT Initial address of result buffer (choice).
datatype IN Datatype of the element in all buffers (handle).
target_rank IN Rank of target (nonnegative integer).
target_disp IN Displacement from start of window to beginning of target buffer (nonnegative integer).
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Fetch_and_op (C) MPI_FETCH_AND_OP (Fortran)

MPI_Fetch_and_op performs an atomic read-modify-write and returns the data before the accumulate operation.

#include <mpi.h>
int MPI_Fetch_and_op(const void *origin_addr, void *result_addr,
                     MPI_Datatype datatype, int target_rank, MPI_Aint target_disp,
                     MPI_Op op, MPI_Win win)
use mpi
CALL MPI_FETCH_AND_OP(origin_addr, result_addr, datatype, target_rank,
                      target_disp, op, win, ierr)
<arbitrary>                    :: origin_addr(*), result_addr(*)
INTEGER(KIND=MPI_ADDRESS_KIND) :: target_disp
INTEGER                        :: datatype, target_rank, op, win, ierr
origin_addr IN Initial address of buffer (choice).
result_addr OUT Initial address of result buffer (choice).
datatype IN Datatype of the entry in origin, result, and target buffers (handle).
target_rank IN Rank of target (nonnegative integer).
target_disp IN Displacement from start of window to beginning of target buffer (nonnegative integer).
op IN Reduce operation (handle).
win IN Window object (handle).
ierr OUT Return code (Fortran only).
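
A classic use is a shared counter: each caller atomically adds 1 to a value held in rank 0's window and receives the previous value as its ticket. This sketch assumes the caller is inside a passive-target epoch on win (for example, after MPI_Win_lock on rank 0); the function name is illustrative.

#include <mpi.h>

long next_ticket(MPI_Win win)
{
    long one = 1, prev;

    MPI_Fetch_and_op(&one, &prev, MPI_LONG, 0 /* target rank */,
                     (MPI_Aint)0, MPI_SUM, win);
    return prev;
}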



MPI_Get (C) MPI_GET (Fortran)

MPI_Get transfers the contents of the target buffer to the origin buffer.

#include <mpi.h>
int MPI_Get(void *origin_addr, int origin_count,
            MPI_Datatype origin_datatype,
            int target_rank,
            MPI_Aint target_disp,
            int target_count,
            MPI_Datatype target_datatype,
            MPI_Win win)
use mpi
CALL MPI_GET(origin_addr, origin_count, origin_datatype,
             target_rank, target_disp, target_count,
             target_datatype, win, ierr)
<arbitrary> :: origin_addr(*)
INTEGER                        :: origin_count, origin_datatype,
                                  target_rank,
                                  target_count, target_datatype,
                                  win, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: target_disp
origin_addr OUT Initial address of origin buffer (choice).
origin_count IN Number of entries in origin buffer (nonnegative integer).
origin_datatype IN Datatype of each entry in origin buffer (handle).
target_rank IN Rank of target (nonnegative integer).
target_disp IN Displacement from start of window to beginning of target buffer (nonnegative integer).
target_count IN Number of entries in target buffer (nonnegative integer).
target_datatype IN Datatype of each entry in target buffer (handle).
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Get_accumulate (C) MPI_GET_ACCUMULATE (Fortran)

MPI_Get_accumulate performs an atomic, one-sided read-and-accumulate operation.

#include <mpi.h>
int MPI_Get_accumulate(const void *origin_addr, int origin_count,
                       MPI_Datatype origin_datatype, void *result_addr,
                       int result_count, MPI_Datatype result_datatype,
                       int target_rank, MPI_Aint target_disp, int target_count,
                       MPI_Datatype target_datatype, MPI_Op op, MPI_Win win)
use mpi
CALL MPI_GET_ACCUMULATE(origin_addr, origin_count, origin_datatype, result_addr,
                        result_count, result_datatype, target_rank, target_disp,
                        target_count, target_datatype, op, win, ierr)
<arbitrary>                    :: origin_addr(*), result_addr(*)
INTEGER(KIND=MPI_ADDRESS_KIND) :: target_disp
INTEGER                        :: origin_count, origin_datatype, result_count, result_datatype,
                                  target_rank, target_count, target_datatype, op, win, ierr
origin_addr IN Initial address of buffer (choice).
origin_count IN Number of entries in origin buffer (nonnegative integer).
origin_datatype IN Datatype of each entry in origin buffer (handle).
result_addr OUT Initial address of result buffer (choice).
result_count IN Number of entries in result buffer (nonnegative integer).
result_datatype IN Datatype of each entry in result buffer (handle).
target_rank IN Rank of target (nonnegative integer).
target_disp IN Displacement from start of window to beginning of target buffer (nonnegative integer).
target_count IN Number of entries in target buffer (nonnegative integer).
target_datatype IN Datatype of each entry in target buffer (handle).
op IN Reduce operation (handle).
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Put (C) MPI_PUT (Fortran)

MPI_Put transfers the contents of the origin buffer to the target buffer.

#include <mpi.h>
int MPI_Put(void *origin_addr, int origin_count,
            MPI_Datatype origin_datatype, int target_rank,
            MPI_Aint target_disp, int target_count,
            MPI_Datatype target_datatype, MPI_Win win)
use mpi
CALL MPI_PUT(origin_addr, origin_count, origin_datatype,
             target_rank, target_disp, target_count,
             target_datatype, win, ierr)
<arbitrary>                    :: origin_addr(*)
INTEGER                        :: origin_count, origin_datatype,
                                  target_rank, target_count,
                                  target_datatype, win, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: target_disp
origin_addr IN Initial address of origin buffer (choice).
origin_count IN Number of entries in origin buffer (nonnegative integer).
origin_datatype IN Datatype of each entry in origin buffer (handle).
target_rank IN Rank of target (nonnegative integer).
target_disp IN Displacement from start of window to target buffer (nonnegative integer).
target_count IN Number of entries in target buffer (nonnegative integer).
target_datatype IN Datatype of each entry in target buffer (handle).
win IN Window object used for communication (handle).
ierr OUT Return code (Fortran only).
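
The sketch below shows active-target communication: every rank exposes one double in a window, writes its rank into the right-hand neighbor's window with MPI_Put, and closes the access epoch with MPI_Win_fence (both MPI_Win_create and MPI_Win_fence are described later in this section). The function name is illustrative.

#include <mpi.h>

void ring_put(MPI_Comm comm)
{
    int rank, size;
    double mine, theirs = 0.0;
    MPI_Win win;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    mine = (double)rank;

    MPI_Win_create(&theirs, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, comm, &win);
    MPI_Win_fence(0, win);
    MPI_Put(&mine, 1, MPI_DOUBLE, (rank + 1) % size,
            0, 1, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);   /* theirs now holds the left neighbor's rank */
    MPI_Win_free(&win);
}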



MPI_Raccumulate (C) MPI_RACCUMULATE (Fortran)

MPI_Raccumulate accumulates data into the target process using remote memory access and returns a request handle for the operation.

#include <mpi.h>
int MPI_Raccumulate(const void *origin_addr, int origin_count,
                    MPI_Datatype origin_datatype, int target_rank,
                    MPI_Aint target_disp, int target_count,
                    MPI_Datatype target_datatype, MPI_Op op, MPI_Win win,
                    MPI_Request *request)
use mpi
CALL MPI_RACCUMULATE(origin_addr, origin_count, origin_datatype, target_rank,
                     target_disp, target_count, target_datatype, op, win, request,
                     ierr)
<arbitrary>                    :: origin_addr(*)
INTEGER(KIND=MPI_ADDRESS_KIND) :: target_disp
INTEGER                        :: origin_count, origin_datatype, target_rank, target_count,
                                  target_datatype, op, win, request, ierr
origin_addr IN Initial address of buffer (choice).
origin_count IN Number of entries in buffer (nonnegative integer).
origin_datatype IN Datatype of each entry in origin buffer (handle).
target_rank IN Rank of target (nonnegative integer).
target_disp IN Displacement from start of window to beginning of target buffer (nonnegative integer).
target_count IN Number of entries in target buffer (nonnegative integer).
target_datatype IN Datatype of each entry in target buffer (handle).
op IN Reduce operation (handle).
win IN Window object (handle).
request OUT RMA request (handle).
ierr OUT Return code (Fortran only).



MPI_Rget (C) MPI_RGET (Fortran)

MPI_Rget transfers data from a memory window on a remote process and returns a request handle for the operation.

#include <mpi.h>
int MPI_Rget(void *origin_addr, int origin_count,
             MPI_Datatype origin_datatype, int target_rank,
             MPI_Aint target_disp, int target_count,
             MPI_Datatype target_datatype, MPI_Win win,
             MPI_Request *request)
use mpi
CALL MPI_RGET(origin_addr, origin_count, origin_datatype, target_rank,
              target_disp, target_count, target_datatype, win, request,
              ierr)
<arbitrary>                    :: origin_addr(*)
INTEGER(KIND=MPI_ADDRESS_KIND) :: target_disp
INTEGER                        :: origin_count, origin_datatype, target_rank, target_count,
                                  target_datatype, win, request, ierr
origin_addr OUT Initial address of origin buffer (choice).
origin_count IN Number of entries in origin buffer (nonnegative integer).
origin_datatype IN Datatype of each entry in origin buffer (handle).
target_rank IN Rank of target (nonnegative integer).
target_disp IN Displacement from window start to the beginning of the target buffer (nonnegative integer).
target_count IN Number of entries in target buffer (nonnegative integer).
target_datatype IN Datatype of each entry in target buffer (handle).
win IN Window object (handle).
request OUT RMA request (handle).
ierr OUT Return code (Fortran only).



MPI_Rget_accumulate (C) MPI_RGET_ACCUMULATE (Fortran)

MPI_Rget_accumulate performs an atomic, one-sided read-and-accumulate operation and returns a request handle for the operation.

#include <mpi.h>
int MPI_Rget_accumulate(const void *origin_addr, int origin_count,
                        MPI_Datatype origin_datatype, void *result_addr,
                        int result_count, MPI_Datatype result_datatype,
                        int target_rank, MPI_Aint target_disp, int target_count,
                        MPI_Datatype target_datatype, MPI_Op op, MPI_Win win,
                        MPI_Request *request)
use mpi
CALL MPI_RGET_ACCUMULATE(origin_addr, origin_count, origin_datatype,
                         result_addr, result_count, result_datatype, target_rank,
                         target_disp, target_count, target_datatype, op, win, request,
                         ierr)
<arbitrary>                    :: origin_addr(*), result_addr(*)
INTEGER(KIND=MPI_ADDRESS_KIND) :: target_disp
INTEGER                        :: origin_count, origin_datatype, result_count, result_datatype,
                                  target_rank, target_count, target_datatype, op, win, request, ierr
origin_addr IN Initial address of buffer (choice).
origin_count IN Number of entries in origin buffer (nonnegative integer).
origin_datatype IN Datatype of each entry in origin buffer (handle).
result_addr OUT Initial address of result buffer (choice).
result_count IN Number of entries in result buffer (nonnegative integer).
result_datatype IN Datatype of each entry in result buffer (handle).
target_rank IN Rank of target (nonnegative integer).
target_disp IN Displacement from start of window to beginning of target buffer (nonnegative integer).
target_count IN Number of entries in target buffer (nonnegative integer).
target_datatype IN Datatype of each entry in target buffer (handle).
op IN Reduce operation (handle).
win IN Window object (handle).
request OUT RMA request (handle).
ierr OUT Return code (Fortran only).



MPI_Rput (C) MPI_RPUT (Fortran)

MPI_Rput transfers data into a memory window on a remote process and returns a request handle for the operation.

#include <mpi.h>
int MPI_Rput(const void *origin_addr, int origin_count,
             MPI_Datatype origin_datatype, int target_rank,
             MPI_Aint target_disp, int target_count,
             MPI_Datatype target_datatype, MPI_Win win,
             MPI_Request *request)
use mpi
CALL MPI_RPUT(origin_addr, origin_count, origin_datatype, target_rank,
              target_disp, target_count, target_datatype, win, request,
              ierr)
<arbitrary>                    :: origin_addr(*)
INTEGER(KIND=MPI_ADDRESS_KIND) :: target_disp
INTEGER                        :: origin_count, origin_datatype, target_rank, target_count,
                                  target_datatype, win, request, ierr
origin_addr IN Initial address of origin buffer (choice).
origin_count IN Number of entries in origin buffer (nonnegative integer).
origin_datatype IN Datatype of each entry in origin buffer (handle).
target_rank IN Rank of target (nonnegative integer).
target_disp IN Displacement from start of window to target buffer (nonnegative integer).
target_count IN Number of entries in target buffer (nonnegative integer).
target_datatype IN Datatype of each entry in target buffer (handle).
win IN Window object (handle).
request OUT RMA request (handle).
ierr OUT Return code (Fortran only).



MPI_Win_allocate (C) MPI_WIN_ALLOCATE (Fortran)

MPI_Win_allocate creates and allocates an MPI window object for one-sided communication.

#include <mpi.h>
int MPI_Win_allocate(MPI_Aint size, int disp_unit, MPI_Info info,
                     MPI_Comm comm, void *baseptr, MPI_Win *win)
use mpi
CALL MPI_WIN_ALLOCATE(size, disp_unit, info, comm, baseptr, win, ierr)
INTEGER                        :: disp_unit, info, comm, win, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: size, baseptr
size IN Size of window in bytes (nonnegative integer).
disp_unit IN Local unit size for displacements, in bytes (positive integer).
info IN Info argument (handle).
comm IN Intra-communicator (handle).
baseptr OUT Initial address of window (choice).
win OUT Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_allocate_shared (C) MPI_WIN_ALLOCATE_SHARED (Fortran)

MPI_Win_allocate_shared creates an MPI window object for one-sided communication and shared memory access.

#include <mpi.h>
int MPI_Win_allocate_shared(MPI_Aint size, int disp_unit, MPI_Info info,
                            MPI_Comm comm, void *baseptr, MPI_Win *win)
use mpi
CALL MPI_WIN_ALLOCATE_SHARED(size, disp_unit, info, comm, baseptr, win, ierr)
INTEGER                        :: disp_unit, info, comm, win, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: size, baseptr
size IN Size of local window in bytes (nonnegative integer).
disp_unit IN Local unit size for displacements, in bytes (positive integer).
info IN Info argument (handle).
comm IN Intra-communicator (handle).
baseptr OUT Address of local allocated window segment (choice).
win OUT Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_attach (C) MPI_WIN_ATTACH (Fortran)

MPI_Win_attach attaches memory to a dynamic window.

#include <mpi.h>
int MPI_Win_attach(MPI_Win win, void *base, MPI_Aint size)
use mpi
CALL MPI_WIN_ATTACH(win, base, size, ierr)
INTEGER                         :: win, ierr
<arbitrary>                     :: base(*)
INTEGER (KIND=MPI_ADDRESS_KIND) :: size
win IN Window object (handle).
base IN Initial address of memory to be attached (choice).
size IN Size of memory to be attached in bytes (nonnegative integer).
ierr OUT Return code (Fortran only).



MPI_Win_complete (C) MPI_WIN_COMPLETE (Fortran)

MPI_Win_complete completes an RMA access epoch.

#include <mpi.h>
int MPI_Win_complete(MPI_Win win)
use mpi
CALL MPI_WIN_COMPLETE(win, ierr)
INTEGER     :: win, ierr
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_create (C) MPI_WIN_CREATE (Fortran)

MPI_Win_create creates a window object for RMA (one-sided communication) operations.

#include <mpi.h>
int MPI_Win_create(void *base, MPI_Aint size, int disp_unit,
                   MPI_Info info, MPI_Comm comm, MPI_Win *win)
use mpi
CALL MPI_WIN_CREATE(base, size, disp_unit, info, comm, win, ierr)
<arbitrary>                    :: base(*)
INTEGER                        :: disp_unit, info, comm, win, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: size
base IN Initial address of window (choice).
size IN Size of window in bytes (nonnegative integer).
disp_unit IN Local unit size for displacements in bytes (positive integer).
info IN Info argument providing hints about the expected usage of the window (handle, ignored).
comm IN Communicator (handle).
win OUT Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_create_dynamic (C) MPI_WIN_CREATE_DYNAMIC (Fortran)

MPI_Win_create_dynamic creates an MPI window object for one-sided communication, which allows memory to be dynamically exposed and un-exposed for RMA operations.

#include <mpi.h>
int MPI_Win_create_dynamic(MPI_Info info, MPI_Comm comm, MPI_Win *win)
use mpi
CALL MPI_WIN_CREATE_DYNAMIC(info, comm, win, ierr)
INTEGER :: info, comm, win, ierr
info IN Info argument (handle).
comm IN Intra-communicator (handle).
win OUT Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_detach (C) MPI_WIN_DETACH (Fortran)

MPI_Win_detach detaches memory from a dynamic window.

#include <mpi.h>
int MPI_Win_detach(MPI_Win win, const void *base)
use mpi
CALL MPI_WIN_DETACH(win, base, ierr)
INTEGER           :: win, ierr
<arbitrary>       :: base(*)
win IN Window object (handle).
base IN Initial address of memory to be detached (choice).
ierr OUT Return code (Fortran only).



MPI_Win_fence (C) MPI_WIN_FENCE (Fortran)

MPI_Win_fence performs fence synchronization.

#include <mpi.h>
int MPI_Win_fence(int assert, MPI_Win win)
use mpi
CALL MPI_WIN_FENCE(assert, win, ierr)
INTEGER     :: assert, win, ierr
assert IN Program assertion (integer).
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_flush (C) MPI_WIN_FLUSH (Fortran)

MPI_Win_flush completes all outstanding RMA operations at the given target.

#include <mpi.h>
int MPI_Win_flush(int rank, MPI_Win win)
use mpi
CALL MPI_WIN_FLUSH(rank, win, ierr)
INTEGER :: rank, win, ierr
rank IN Rank of target window (nonnegative integer).
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_flush_all (C) MPI_WIN_FLUSH_ALL (Fortran)

MPI_Win_flush_all completes all outstanding RMA operations at all targets.

#include <mpi.h>
int MPI_Win_flush_all(MPI_Win win)
use mpi
CALL MPI_WIN_FLUSH_ALL(win, ierr)
INTEGER :: win, ierr
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_flush_local (C) MPI_WIN_FLUSH_LOCAL (Fortran)

MPI_Win_flush_local completes locally all outstanding RMA operations at the given target.

#include <mpi.h>
int MPI_Win_flush_local(int rank, MPI_Win win)
use mpi
CALL MPI_WIN_FLUSH_LOCAL(rank, win, ierr)
INTEGER :: rank, win, ierr
rank IN Rank of target window (nonnegative integer).
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_flush_local_all (C) MPI_WIN_FLUSH_LOCAL_ALL (Fortran)

MPI_Win_flush_local_all completes locally all outstanding RMA operations at all targets.

#include <mpi.h>
int MPI_Win_flush_local_all(MPI_Win win)
use mpi
CALL MPI_WIN_FLUSH_LOCAL_ALL(win, ierr)
INTEGER :: win, ierr
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_free (C) MPI_WIN_FREE (Fortran)

MPI_Win_free frees a window object.

#include <mpi.h>
int MPI_Win_free(MPI_Win *win)
use mpi
CALL MPI_WIN_FREE(win, ierr)
INTEGER     :: win, ierr
win INOUT Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_get_group (C) MPI_WIN_GET_GROUP (Fortran)

MPI_Win_get_group returns a copy of the group of the communicator used to create a window.

#include <mpi.h>
int MPI_Win_get_group(MPI_Win win, MPI_Group *group)
use mpi
CALL MPI_WIN_GET_GROUP(win, group, ierr)
INTEGER     :: win, group, ierr
win IN Window object (handle).
group OUT Group of processes that share access to window (handle).
ierr OUT Return code (Fortran only).



MPI_Win_get_info (C) MPI_WIN_GET_INFO (Fortran)

MPI_Win_get_info returns a new info object containing the hints of the window.

#include <mpi.h>
int MPI_Win_get_info(MPI_Win win, MPI_Info *info_used)
use mpi
CALL MPI_WIN_GET_INFO(win, info_used, ierr)
INTEGER :: win, info_used, ierr
win IN Window object (handle).
info_used OUT New info object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_lock (C) MPI_WIN_LOCK (Fortran)

MPI_Win_lock starts an RMA access epoch and locks a window.

#include <mpi.h>
int MPI_Win_lock(int lock_type, int rank, int assert, MPI_Win win)
use mpi
CALL MPI_WIN_LOCK(lock_type, rank, assert, win, ierr)
INTEGER     :: lock_type, rank, assert, win, ierr
lock_type IN MPI_LOCK_EXCLUSIVE or MPI_LOCK_SHARED (state).
rank IN Rank of locked window (nonnegative integer).
assert IN Program assertion. The only valid assertion is MPI_MODE_NOCHECK, which has no effect (integer).
win IN Window object (handle).
ierr OUT Return code (Fortran only).
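
A passive-target sketch: read one double from the start of rank 0's window without any matching call on rank 0. The RMA operation is guaranteed complete only after MPI_Win_unlock returns; the function name is illustrative.

#include <mpi.h>

double read_remote(MPI_Win win)
{
    double value;

    MPI_Win_lock(MPI_LOCK_SHARED, 0 /* rank */, 0 /* assert */, win);
    MPI_Get(&value, 1, MPI_DOUBLE, 0, 0, 1, MPI_DOUBLE, win);
    MPI_Win_unlock(0, win);   /* the Get is complete after unlock */
    return value;
}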



MPI_Win_lock_all (C) MPI_WIN_LOCK_ALL (Fortran)

MPI_Win_lock_all begins an RMA access epoch at all processes on the given window.

#include <mpi.h>
int MPI_Win_lock_all(int assert, MPI_Win win)
use mpi
CALL MPI_WIN_LOCK_ALL(assert, win, ierr)
INTEGER :: assert, win, ierr
assert IN Program assertion (integer).
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_post (C) MPI_WIN_POST (Fortran)

MPI_Win_post starts an RMA exposure epoch.

#include <mpi.h>
int MPI_Win_post(MPI_Group group, int assert, MPI_Win win)
use mpi
CALL MPI_WIN_POST(group, assert, win, ierr)
INTEGER     :: group, assert, win, ierr
group IN Group of origin processes (handle).
assert IN Program assertion (integer).
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_set_info (C) MPI_WIN_SET_INFO (Fortran)

MPI_Win_set_info sets new values for the hints of the window.

#include <mpi.h>
int MPI_Win_set_info(MPI_Win win, MPI_Info info)
use mpi
CALL MPI_WIN_SET_INFO(win, info, ierr)
INTEGER :: win, info, ierr
win INOUT Window object (handle).
info IN Info object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_shared_query (C) MPI_WIN_SHARED_QUERY (Fortran)

MPI_Win_shared_query queries the size and base pointer for remote memory segments.

#include <mpi.h>
int MPI_Win_shared_query(MPI_Win win, int rank, MPI_Aint *size,
                         int *disp_unit, void *baseptr)
use mpi
CALL MPI_WIN_SHARED_QUERY(win, rank, size, disp_unit, baseptr, ierr)
INTEGER                         :: win, rank, disp_unit, ierr
INTEGER (KIND=MPI_ADDRESS_KIND) :: size, baseptr
win IN Window object (handle).
rank IN Rank in the group of window win (nonnegative integer) or MPI_PROC_NULL.
size OUT Size of the window segment (nonnegative integer).
disp_unit OUT Local unit size for displacements, in bytes (positive integer).
baseptr OUT Address for load/store access to window segment (choice).
ierr OUT Return code (Fortran only).



MPI_Win_start (C) MPI_WIN_START (Fortran)

MPI_Win_start starts an RMA access epoch.

#include <mpi.h>
int MPI_Win_start(MPI_Group group, int assert, MPI_Win win)
use mpi
CALL MPI_WIN_START(group, assert, win, ierr)
INTEGER     :: group, assert, win, ierr
group IN Group of target processes (handle).
assert IN Program assertion. assert=0 is always valid. Another valid assertion is MPI_MODE_NOCHECK (integer).
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_sync (C) MPI_WIN_SYNC (Fortran)

MPI_Win_sync synchronizes public and private copies of the given window.

#include <mpi.h>
int MPI_Win_sync(MPI_Win win)
use mpi
CALL MPI_WIN_SYNC(win, ierr)
INTEGER :: win, ierr
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_test (C) MPI_WIN_TEST (Fortran)

MPI_Win_test determines whether RMA operations are complete.

#include <mpi.h>
int MPI_Win_test(MPI_Win win, int *flag)
use mpi
CALL MPI_WIN_TEST(win, flag, ierr)
LOGICAL     :: flag
INTEGER     :: win, ierr
win IN Window object (handle).
flag OUT True if a call to MPI_WIN_WAIT would return, otherwise false (logical).
ierr OUT Return code (Fortran only).



MPI_Win_unlock (C) MPI_WIN_UNLOCK (Fortran)

MPI_Win_unlock completes an RMA access epoch and unlocks a window.

#include <mpi.h>
int MPI_Win_unlock(int rank, MPI_Win win)
use mpi
CALL MPI_WIN_UNLOCK(rank, win, ierr)
INTEGER     :: rank, win, ierr
rank IN Rank of window (nonnegative integer).
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_unlock_all (C) MPI_WIN_UNLOCK_ALL (Fortran)

MPI_Win_unlock_all completes an RMA access epoch at all processes on the given window.

#include <mpi.h>
int MPI_Win_unlock_all(MPI_Win win)
use mpi
CALL MPI_WIN_UNLOCK_ALL(win, ierr)
INTEGER :: win, ierr
win IN Window object (handle).
ierr OUT Return code (Fortran only).



MPI_Win_wait (C) MPI_WIN_WAIT (Fortran)

MPI_Win_wait completes an RMA access epoch.

#include <mpi.h>
int MPI_Win_wait(MPI_Win win)
use mpi
CALL MPI_WIN_WAIT(win, ierr)
INTEGER     :: win, ierr
win IN Window object (handle).
ierr OUT Return code (Fortran only).


4.10   External Interfaces

MPI_Grequest_complete/MPI_GREQUEST_COMPLETE Informs MPI that the operations represented by a generalized request are complete
MPI_Grequest_start/MPI_GREQUEST_START Starts a new generalized request
MPI_Init_thread/MPI_INIT_THREAD Initializes the MPI execution environment in a manner similar to MPI_Init
MPI_Is_thread_main/MPI_IS_THREAD_MAIN Determines whether the calling thread is the main thread
MPI_Query_thread/MPI_QUERY_THREAD Returns the current level of thread support
MPI_Status_set_cancelled/MPI_STATUS_SET_CANCELLED Sets the cancelled state of a status object
MPI_Status_set_elements/MPI_STATUS_SET_ELEMENTS Modifies the opaque part of status
MPI_Status_set_elements_x/MPI_STATUS_SET_ELEMENTS_X Modifies the opaque part of status



MPI_Grequest_complete (C) MPI_GREQUEST_COMPLETE (Fortran)

MPI_Grequest_complete informs MPI that the operations represented by a generalized request are complete.

#include <mpi.h>
int MPI_Grequest_complete(MPI_Request request)
use mpi
CALL MPI_GREQUEST_COMPLETE(request, ierr)
INTEGER     :: request, ierr
request INOUT Generalized request (handle).
ierr OUT Return code (Fortran only).



MPI_Grequest_start (C) MPI_GREQUEST_START (Fortran)

MPI_Grequest_start starts a new generalized request.

#include <mpi.h>
int MPI_Grequest_start(MPI_Grequest_query_function *query_fn,
                       MPI_Grequest_free_function *free_fn,
                       MPI_Grequest_cancel_function *cancel_fn,
                       void *extra_state, MPI_Request *request)
use mpi
CALL MPI_GREQUEST_START(query_fn, free_fn, cancel_fn,
                        extra_state, request, ierr)
EXTERNAL                       :: query_fn, free_fn, cancel_fn
INTEGER                        :: request, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: extra_state
query_fn IN Callback function invoked when request status is queried (function).
free_fn IN Callback function invoked when request is freed (function).
cancel_fn IN Callback function invoked when request is cancelled (function).
extra_state IN Extra state.
request OUT Generalized request (handle).
ierr OUT Return code (Fortran only).
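
A minimal C sketch of a generalized request with do-nothing callbacks; the user-defined operation (here left hypothetical) calls MPI_Grequest_complete when it finishes, which lets MPI_Wait return. The query callback also illustrates MPI_Status_set_elements and MPI_Status_set_cancelled, described later in this section:

#include <mpi.h>
#include <stddef.h>

/* Minimal callbacks for a generalized request. */
static int query_fn(void *extra_state, MPI_Status *status)
{
    /* Fill in the fields a completed operation would have. */
    MPI_Status_set_elements(status, MPI_BYTE, 0);
    MPI_Status_set_cancelled(status, 0);
    status->MPI_SOURCE = MPI_UNDEFINED;
    status->MPI_TAG    = MPI_UNDEFINED;
    return MPI_SUCCESS;
}
static int free_fn(void *extra_state)             { return MPI_SUCCESS; }
static int cancel_fn(void *extra_state, int done) { return MPI_SUCCESS; }

/* ... inside some function: */
MPI_Request req;
MPI_Grequest_start(query_fn, free_fn, cancel_fn, NULL, &req);
/* perform the user-defined operation, e.g. in another thread ... */
MPI_Grequest_complete(req);           /* mark the request complete */
MPI_Wait(&req, MPI_STATUS_IGNORE);    /* now returns               */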



MPI_Init_thread (C) MPI_INIT_THREAD (Fortran)

MPI_Init_thread initializes the MPI execution environment in a manner similar to MPI_Init.

#include <mpi.h>
int MPI_Init_thread(int *argc, char ***argv, int required,
                    int *provided)
use mpi
CALL MPI_INIT_THREAD(required, provided, ierr)
INTEGER     :: required, provided, ierr
argc IN Pointer to the number of arguments (C only).
argv IN Pointer to the argument vector (C only).
required IN Desired level of thread support (integer).
provided OUT Provided level of thread support (integer).
ierr OUT Return code (Fortran only).
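
For example, a program that wants full multithreading can request MPI_THREAD_MULTIPLE and check what was actually granted; the same sketch exercises MPI_Query_thread and MPI_Is_thread_main, described below:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int required = MPI_THREAD_MULTIPLE, provided, flag;

    MPI_Init_thread(&argc, &argv, required, &provided);
    if (provided < required)
        printf("only thread level %d available\n", provided);

    MPI_Query_thread(&provided);   /* same value as returned above   */
    MPI_Is_thread_main(&flag);     /* true: this is the init thread  */

    MPI_Finalize();
    return 0;
}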



MPI_Is_thread_main (C) MPI_IS_THREAD_MAIN (Fortran)

MPI_Is_thread_main determines whether the calling thread is the main thread.

#include <mpi.h>
int MPI_Is_thread_main(int *flag)
use mpi
CALL MPI_IS_THREAD_MAIN(flag, ierr)
LOGICAL     :: flag
INTEGER     :: ierr
flag OUT True if calling thread is main thread, otherwise false (logical).
ierr OUT Return code (Fortran only).



MPI_Query_thread (C) MPI_QUERY_THREAD (Fortran)

MPI_Query_thread returns the current level of thread support.

#include <mpi.h>
int MPI_Query_thread(int *provided)
use mpi
CALL MPI_QUERY_THREAD(provided, ierr)
INTEGER     :: provided, ierr
provided OUT Provided level of thread support (integer).
ierr OUT Return code (Fortran only).



MPI_Status_set_cancelled (C) MPI_STATUS_SET_CANCELLED (Fortran)

MPI_Status_set_cancelled sets the cancelled state of a status object.

#include <mpi.h>
int MPI_Status_set_cancelled(MPI_Status *status, int flag)
use mpi
CALL MPI_STATUS_SET_CANCELLED(status, flag, ierr)
LOGICAL     :: flag
INTEGER     :: status(MPI_STATUS_SIZE), ierr
status INOUT Status object to modify (status).
flag IN True if status is cancelled, otherwise false (logical).
ierr OUT Return code (Fortran only).



MPI_Status_set_elements (C) MPI_STATUS_SET_ELEMENTS (Fortran)

MPI_Status_set_elements modifies the opaque part of status.

#include <mpi.h>
int MPI_Status_set_elements(MPI_Status *status,
                            MPI_Datatype datatype, int count)
use mpi
CALL MPI_STATUS_SET_ELEMENTS(status, datatype, count, ierr)
INTEGER     :: status(MPI_STATUS_SIZE), datatype, count, ierr
status INOUT Status with which to associate count (status).
datatype IN Datatype associated with count (handle).
count IN Number of elements to associate with status (integer).
ierr OUT Return code (Fortran only).



MPI_Status_set_elements_x (C) MPI_STATUS_SET_ELEMENTS_X (Fortran)

MPI_Status_set_elements_x modifies the opaque part of status.

#include <mpi.h>
int MPI_Status_set_elements_x(MPI_Status *status, MPI_Datatype datatype,
                              MPI_Count count)
use mpi
CALL MPI_STATUS_SET_ELEMENTS_X(status, datatype, count, ierr)
INTEGER                       :: status(MPI_STATUS_SIZE), datatype, ierr
INTEGER (KIND=MPI_COUNT_KIND) :: count
status INOUT Status with which to associate count (status).
datatype IN Datatype associated with count (handle).
count IN Number of elements to associate with status (integer).
ierr OUT Return code (Fortran only).


4.11   I/O

MPI_File_close/MPI_FILE_CLOSE Closes a file associated with fh
MPI_File_delete/MPI_FILE_DELETE Deletes a file
MPI_File_get_amode/MPI_FILE_GET_AMODE Returns the access mode of a file associated with fh
MPI_File_get_atomicity/MPI_FILE_GET_ATOMICITY Returns the current consistency semantics
MPI_File_get_byte_offset/MPI_FILE_GET_BYTE_OFFSET Converts a view-relative offset into an absolute byte position
MPI_File_get_group/MPI_FILE_GET_GROUP Returns a duplicate of the group of the communicator used to open a file
MPI_File_get_info/MPI_FILE_GET_INFO Returns a new info object containing the hints of a file
MPI_File_get_position/MPI_FILE_GET_POSITION Returns the current position of the individual file pointer in etype units relative to the current view
MPI_File_get_position_shared/MPI_FILE_GET_POSITION_SHARED Returns the current position of the shared file pointer in etype units relative to the current view
MPI_File_get_size/MPI_FILE_GET_SIZE Returns the current size of a file in bytes
MPI_File_get_type_extent/MPI_FILE_GET_TYPE_EXTENT Returns the extent of a datatype in a file
MPI_File_get_view/MPI_FILE_GET_VIEW Returns the process's view of the data in a file
MPI_File_iread/MPI_FILE_IREAD Reads data using the individual file pointer
MPI_File_iread_at/MPI_FILE_IREAD_AT Reads data using an explicit offset
MPI_File_iread_shared/MPI_FILE_IREAD_SHARED Reads data using the shared file pointer
MPI_File_iwrite/MPI_FILE_IWRITE Writes data using the individual file pointer
MPI_File_iwrite_at/MPI_FILE_IWRITE_AT Writes data using an explicit offset
MPI_File_iwrite_shared/MPI_FILE_IWRITE_SHARED Writes data using the shared file pointer
MPI_File_open/MPI_FILE_OPEN Opens a file
MPI_File_preallocate/MPI_FILE_PREALLOCATE Ensures that storage space is allocated
MPI_File_read/MPI_FILE_READ Reads a file using the individual file pointer
MPI_File_read_all/MPI_FILE_READ_ALL Reads data collectively using the individual file pointer
MPI_File_read_all_begin/MPI_FILE_READ_ALL_BEGIN Begins a nonblocking collective read of all processes associated with a file handle
MPI_File_read_all_end/MPI_FILE_READ_ALL_END Completes a nonblocking collective read of all processes associated with a file handle
MPI_File_read_at/MPI_FILE_READ_AT Reads a file using an explicit offset
MPI_File_read_at_all/MPI_FILE_READ_AT_ALL Reads data collectively using an explicit offset
MPI_File_read_at_all_begin/MPI_FILE_READ_AT_ALL_BEGIN Begins a nonblocking collective read of all processes associated with a file handle. The read begins at an explicit offset
MPI_File_read_at_all_end/MPI_FILE_READ_AT_ALL_END Completes a nonblocking collective read started by MPI_File_read_at_all_begin
MPI_File_read_ordered/MPI_FILE_READ_ORDERED Reads data collectively in rank order using the shared file pointer
MPI_File_read_ordered_begin/MPI_FILE_READ_ORDERED_BEGIN Begins a nonblocking collective ordered read of all processes associated with a file handle
MPI_File_read_ordered_end/MPI_FILE_READ_ORDERED_END Completes a nonblocking collective ordered read started by MPI_File_read_ordered_begin
MPI_File_read_shared/MPI_FILE_READ_SHARED Reads a file using the shared file pointer
MPI_File_seek/MPI_FILE_SEEK Updates the individual file pointer
MPI_File_seek_shared/MPI_FILE_SEEK_SHARED Updates the shared file pointer
MPI_File_set_atomicity/MPI_FILE_SET_ATOMICITY Sets the consistency semantics
MPI_File_set_info/MPI_FILE_SET_INFO Sets new values for the hints of a file
MPI_File_set_size/MPI_FILE_SET_SIZE Resizes a file
MPI_File_set_view/MPI_FILE_SET_VIEW Changes a process's view
MPI_File_sync/MPI_FILE_SYNC Transfers data to a storage device
MPI_File_write/MPI_FILE_WRITE Writes a file using the individual file pointer
MPI_File_write_all/MPI_FILE_WRITE_ALL Writes data collectively using the individual file pointer
MPI_File_write_all_begin/MPI_FILE_WRITE_ALL_BEGIN Begins a nonblocking collective write of all processes associated with a file handle
MPI_File_write_all_end/MPI_FILE_WRITE_ALL_END Completes a nonblocking collective write started by MPI_File_write_all_begin
MPI_File_write_at/MPI_FILE_WRITE_AT Writes a file using an explicit offset
MPI_File_write_at_all/MPI_FILE_WRITE_AT_ALL Writes data collectively using an explicit offset
MPI_File_write_at_all_begin/MPI_FILE_WRITE_AT_ALL_BEGIN Begins a nonblocking collective write of all processes associated with a file handle. The write begins at an explicit offset
MPI_File_write_at_all_end/MPI_FILE_WRITE_AT_ALL_END Completes a nonblocking collective write started by MPI_File_write_at_all_begin
MPI_File_write_ordered/MPI_FILE_WRITE_ORDERED Writes data collectively in rank order using the shared file pointer
MPI_File_write_ordered_begin/MPI_FILE_WRITE_ORDERED_BEGIN Begins a nonblocking collective ordered write of all processes associated with a file handle
MPI_File_write_ordered_end/MPI_FILE_WRITE_ORDERED_END Completes a nonblocking collective ordered write started by MPI_File_write_ordered_begin
MPI_File_write_shared/MPI_FILE_WRITE_SHARED Writes a file using the shared file pointer
MPI_Register_datarep/MPI_REGISTER_DATAREP Registers data conversion functions
MPI_File_iread_at_all/MPI_FILE_IREAD_AT_ALL A nonblocking version of MPI_FILE_READ_AT_ALL
MPI_File_iwrite_at_all/MPI_FILE_IWRITE_AT_ALL A nonblocking version of MPI_FILE_WRITE_AT_ALL
MPI_File_iread_all/MPI_FILE_IREAD_ALL A nonblocking version of MPI_FILE_READ_ALL
MPI_File_iwrite_all/MPI_FILE_IWRITE_ALL A nonblocking version of MPI_FILE_WRITE_ALL



MPI_File_close (C) MPI_FILE_CLOSE (Fortran)

MPI_File_close closes a file associated with fh.

#include <mpi.h>
int MPI_File_close(MPI_File *fh)
use mpi
CALL MPI_FILE_CLOSE(fh, ierr)
INTEGER     :: fh, ierr
fh INOUT File handle (handle).
ierr OUT Return code (Fortran only).



MPI_File_delete (C) MPI_FILE_DELETE (Fortran)

MPI_File_delete deletes a file.

#include <mpi.h>
int MPI_File_delete(char *filename, MPI_Info info)
use mpi
CALL MPI_FILE_DELETE(filename, info, ierr)
CHARACTER*(*) :: filename
INTEGER       :: info, ierr
filename IN Name of file to delete (string).
info IN Info object (handle).
ierr OUT Return code (Fortran only).



MPI_File_get_amode (C) MPI_FILE_GET_AMODE (Fortran)

MPI_File_get_amode returns the access mode of a file associated with fh.

#include <mpi.h>
int MPI_File_get_amode(MPI_File fh, int *amode)
use mpi
CALL MPI_FILE_GET_AMODE(fh, amode, ierr)
INTEGER     :: fh, amode, ierr
fh IN File handle (handle).
amode OUT File access mode used to open file (integer).
ierr OUT Return code (Fortran only).



MPI_File_get_atomicity (C) MPI_FILE_GET_ATOMICITY (Fortran)

MPI_File_get_atomicity returns the current consistency semantics.

#include <mpi.h>
int MPI_File_get_atomicity(MPI_File fh, int *flag)
use mpi
CALL MPI_FILE_GET_ATOMICITY(fh, flag, ierr)
INTEGER     :: fh, ierr
LOGICAL     :: flag
fh IN File handle (handle).
flag OUT True if atomic mode, false if nonatomic mode (logical).
ierr OUT Return code (Fortran only).



MPI_File_get_byte_offset (C) MPI_FILE_GET_BYTE_OFFSET (Fortran)

MPI_File_get_byte_offset converts a view-relative offset into an absolute byte position.

#include <mpi.h>
int MPI_File_get_byte_offset(MPI_File fh, MPI_Offset offset,
                             MPI_Offset *disp)
use mpi
CALL MPI_FILE_GET_BYTE_OFFSET(fh, offset, disp, ierr)
INTEGER                       :: fh, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset, disp
fh IN File handle (handle).
offset IN Offset (integer).
disp OUT Absolute byte position of offset (integer).
ierr OUT Return code (Fortran only).



MPI_File_get_group (C) MPI_FILE_GET_GROUP (Fortran)

MPI_File_get_group returns a duplicate of the group of the communicator used to open a file.

#include <mpi.h>
int MPI_File_get_group(MPI_File fh, MPI_Group *group)
use mpi
CALL MPI_FILE_GET_GROUP(fh, group, ierr)
INTEGER     :: fh, group, ierr
fh IN File handle (handle).
group OUT Group that opened file (handle).
ierr OUT Return code (Fortran only).



MPI_File_get_info (C) MPI_FILE_GET_INFO (Fortran)

MPI_File_get_info returns a new info object containing the hints of a file.

#include <mpi.h>
int MPI_File_get_info(MPI_File fh, MPI_Info *info_used)
use mpi
CALL MPI_FILE_GET_INFO(fh, info_used, ierr)
INTEGER     :: fh, info_used, ierr
fh IN File handle (handle).
info_used OUT New info object (handle).
ierr OUT Return code (Fortran only).



MPI_File_get_position (C) MPI_FILE_GET_POSITION (Fortran)

MPI_File_get_position returns the current position of the individual file pointer in etype units relative to the current view.

#include <mpi.h>
int MPI_File_get_position(MPI_File fh, MPI_Offset *offset)
use mpi
CALL MPI_FILE_GET_POSITION(fh, offset, ierr)
INTEGER                       :: fh, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh IN File handle (handle).
offset OUT Offset of individual pointer (integer).
ierr OUT Return code (Fortran only).



MPI_File_get_position_shared (C) MPI_FILE_GET_POSITION_SHARED (Fortran)

MPI_File_get_position_shared returns the current position of the shared file pointer in etype units relative to the current view.

#include <mpi.h>
int MPI_File_get_position_shared(MPI_File fh, MPI_Offset *offset)
use mpi
CALL MPI_FILE_GET_POSITION_SHARED(fh, offset, ierr)
INTEGER                       :: fh, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh IN File handle (handle).
offset OUT Offset of shared pointer (integer).
ierr OUT Return code (Fortran only).



MPI_File_get_size (C) MPI_FILE_GET_SIZE (Fortran)

MPI_File_get_size returns the current size of a file in bytes.

#include <mpi.h>
int MPI_File_get_size(MPI_File fh, MPI_Offset *size)
use mpi
CALL MPI_FILE_GET_SIZE(fh, size, ierr)
INTEGER                       :: fh, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: size
fh IN File handle (handle).
size OUT Size of the file in bytes (integer).
ierr OUT Return code (Fortran only).



MPI_File_get_type_extent (C) MPI_FILE_GET_TYPE_EXTENT (Fortran)

MPI_File_get_type_extent returns the extent of a datatype in a file.

#include <mpi.h>
int MPI_File_get_type_extent(MPI_File fh, MPI_Datatype datatype,
                             MPI_Aint *extent)
use mpi
CALL MPI_FILE_GET_TYPE_EXTENT(fh, datatype, extent, ierr)
INTEGER                        :: fh, datatype, ierr
INTEGER(KIND=MPI_ADDRESS_KIND) :: extent
fh IN File handle (handle).
datatype IN Datatype (handle).
extent OUT Datatype extent (integer).
ierr OUT Return code (Fortran only).



MPI_File_get_view (C) MPI_FILE_GET_VIEW (Fortran)

MPI_File_get_view returns the process's view of the data in a file.

#include <mpi.h>
int MPI_File_get_view(MPI_File fh, MPI_Offset *disp,
                      MPI_Datatype *etype, MPI_Datatype *filetype,
                      char *datarep)
use mpi
CALL MPI_FILE_GET_VIEW(fh, disp, etype, filetype, datarep,
                       ierr)
CHARACTER*(*)                 :: datarep
INTEGER                       :: fh, etype, filetype, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: disp
fh IN File handle (handle).
disp OUT Displacement (integer).
etype OUT Elementary datatype (handle).
filetype OUT Filetype (handle).
datarep OUT Data representation (string).
ierr OUT Return code (Fortran only).



MPI_File_iread (C) MPI_FILE_IREAD (Fortran)

MPI_File_iread reads data using the individual file pointer.

#include <mpi.h>
int MPI_File_iread(MPI_File fh, void *buf, int count,
                   MPI_Datatype datatype, MPI_Request *request)
use mpi
CALL MPI_FILE_IREAD(fh, buf, count, datatype, request, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, request, ierr
fh INOUT File handle (handle).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
request OUT Request object (handle).
ierr OUT Return code (Fortran only).
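
A sketch of overlapping a nonblocking individual-pointer read with computation; the file name data.bin is hypothetical:

MPI_File    fh;
MPI_Request req;
int         buf[100];

MPI_File_open(MPI_COMM_WORLD, "data.bin", MPI_MODE_RDONLY,
              MPI_INFO_NULL, &fh);
MPI_File_iread(fh, buf, 100, MPI_INT, &req);
/* ... computation overlapped with the read ... */
MPI_Wait(&req, MPI_STATUS_IGNORE);   /* buf is now valid */
MPI_File_close(&fh);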



MPI_File_iread_at (C) MPI_FILE_IREAD_AT (Fortran)

MPI_File_iread_at reads data using an explicit offset.

#include <mpi.h>
int MPI_File_iread_at(MPI_File fh, MPI_Offset offset, void *buf,
                      int count, MPI_Datatype datatype,
                      MPI_Request *request)
use mpi
CALL MPI_FILE_IREAD_AT(fh, offset, buf, count, datatype, request,
                       ierr)
<arbitrary>                   :: buf(*)
INTEGER                       :: fh, count, datatype, request, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh IN File handle (handle).
offset IN File offset (integer).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
request OUT Request object (handle).
ierr OUT Return code (Fortran only).



MPI_File_iread_shared (C) MPI_FILE_IREAD_SHARED (Fortran)

MPI_File_iread_shared reads data using the shared file pointer.

#include <mpi.h>
int MPI_File_iread_shared(MPI_File fh, void *buf, int count,
                          MPI_Datatype datatype, MPI_Request *request)
use mpi
CALL MPI_FILE_IREAD_SHARED(fh, buf, count, datatype, request,
                           ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, request, ierr
fh INOUT File handle (handle).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
request OUT Request object (handle).
ierr OUT Return code (Fortran only).



MPI_File_iwrite (C) MPI_FILE_IWRITE (Fortran)

MPI_File_iwrite writes data using the individual file pointer.

#include <mpi.h>
int MPI_File_iwrite(MPI_File fh, void *buf, int count,
                    MPI_Datatype datatype, MPI_Request *request)
use mpi
CALL MPI_FILE_IWRITE(fh, buf, count, datatype, request, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, request, ierr
fh INOUT File handle (handle).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
request OUT Request object (handle).
ierr OUT Return code (Fortran only).



MPI_File_iwrite_at (C) MPI_FILE_IWRITE_AT (Fortran)

MPI_File_iwrite_at writes data using an explicit offset.

#include <mpi.h>
int MPI_File_iwrite_at(MPI_File fh, MPI_Offset offset, void *buf,
                       int count, MPI_Datatype datatype,
                       MPI_Request *request)
use mpi
CALL MPI_FILE_IWRITE_AT(fh, offset, buf, count, datatype,
                        request, ierr)
<arbitrary>                   :: buf(*)
INTEGER                       :: fh, count, datatype, request, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh INOUT File handle (handle).
offset IN File offset (integer).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
request OUT Request object (handle).
ierr OUT Return code (Fortran only).



MPI_File_iwrite_shared (C) MPI_FILE_IWRITE_SHARED (Fortran)

MPI_File_iwrite_shared writes data using the shared file pointer.

#include <mpi.h>
int MPI_File_iwrite_shared(MPI_File fh, void *buf, int count,
                           MPI_Datatype datatype, MPI_Request *request)
use mpi
CALL MPI_FILE_IWRITE_SHARED(fh, buf, count, datatype, request,
                            ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, request, ierr
fh INOUT File handle (handle).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
request OUT Request object (handle).
ierr OUT Return code (Fortran only).



MPI_File_open (C) MPI_FILE_OPEN (Fortran)

MPI_File_open opens a file.

#include <mpi.h>
int MPI_File_open(MPI_Comm comm, char *filename, int amode,
                  MPI_Info info, MPI_File *fh)
use mpi
CALL MPI_FILE_OPEN(comm, filename, amode, info, fh, ierr)
CHARACTER*(*) :: filename
INTEGER       :: comm, amode, info, fh, ierr
comm IN Communicator (handle).
filename IN Name of file to open (string).
amode IN File access mode (integer).
info IN Info object (handle).
fh OUT New file handle (handle).
ierr OUT Return code (Fortran only).
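
For example, collectively creating a file for writing and checking the return code; the file name out.dat is hypothetical:

MPI_File fh;
int rc = MPI_File_open(MPI_COMM_WORLD, "out.dat",
                       MPI_MODE_CREATE | MPI_MODE_WRONLY,
                       MPI_INFO_NULL, &fh);
if (rc != MPI_SUCCESS)
    MPI_Abort(MPI_COMM_WORLD, rc);
/* ... I/O ... */
MPI_File_close(&fh);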



MPI_File_preallocate (C) MPI_FILE_PREALLOCATE (Fortran)

MPI_File_preallocate ensures that storage space is allocated.

#include <mpi.h>
int MPI_File_preallocate(MPI_File *fh, MPI_Offset size)
use mpi
CALL MPI_FILE_PREALLOCATE(fh, size, ierr)
INTEGER                       :: fh, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: size
fh INOUT File handle (handle).
size IN Size to preallocate file (integer).
ierr OUT Return code (Fortran only).



MPI_File_read (C) MPI_FILE_READ (Fortran)

MPI_File_read reads a file using the individual file pointer.

#include <mpi.h>
int MPI_File_read(MPI_File fh, void *buf, int count,
                  MPI_Datatype datatype, MPI_Status *status)
use mpi
CALL MPI_FILE_READ(fh, buf, count, datatype, status, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_read_all (C) MPI_FILE_READ_ALL (Fortran)

MPI_File_read_all reads data collectively using the individual file pointer.

#include <mpi.h>
int MPI_File_read_all(MPI_File fh, void *buf, int count,
                      MPI_Datatype datatype, MPI_Status *status)
use mpi
CALL MPI_FILE_READ_ALL(fh, buf, count, datatype, status, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_read_all_begin (C) MPI_FILE_READ_ALL_BEGIN (Fortran)

MPI_File_read_all_begin begins a nonblocking collective read of all processes associated with a file handle.

#include <mpi.h>
int MPI_File_read_all_begin(MPI_File fh, void *buf, int count,
                            MPI_Datatype datatype)
use mpi
CALL MPI_FILE_READ_ALL_BEGIN(fh, buf, count, datatype, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, ierr
fh INOUT File handle (handle).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
ierr OUT Return code (Fortran only).



MPI_File_read_all_end (C) MPI_FILE_READ_ALL_END (Fortran)

MPI_File_read_all_end completes a nonblocking collective read of all processes associated with a file handle.

#include <mpi.h>
int MPI_File_read_all_end(MPI_File fh, void *buf,
                          MPI_Status *status)
use mpi
CALL MPI_FILE_READ_ALL_END(fh, buf, status, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf OUT Initial address of buffer (choice).
status OUT Status object (status).
ierr OUT Return code (Fortran only).
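
The begin/end pair forms a split collective: only one split-collective operation may be pending per file handle, and computation can be placed between the two calls. A sketch, assuming fh, buf, and count are set up as for the blocking MPI_File_read_all:

MPI_Status status;
MPI_File_read_all_begin(fh, buf, count, MPI_DOUBLE);
/* ... computation overlapped with the collective read ... */
MPI_File_read_all_end(fh, buf, &status);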



MPI_File_read_at (C) MPI_FILE_READ_AT (Fortran)

MPI_File_read_at reads a file using an explicit offset.

#include <mpi.h>
int MPI_File_read_at(MPI_File fh, MPI_Offset offset, void *buf,
                     int count, MPI_Datatype datatype,
                     MPI_Status *status)
use mpi
CALL MPI_FILE_READ_AT(fh, offset, buf, count, datatype, status,
                      ierr)
<arbitrary> :: buf(*)
INTEGER                       :: fh, count, datatype,
                                 status(MPI_STATUS_SIZE), ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh IN File handle (handle).
offset IN File offset (integer).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).
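
A common pattern is for each rank to read its own contiguous block at an explicit offset, so no file pointer needs to be shared or updated. A sketch, assuming fh is open for reading:

/* Sketch: rank r reads elements [r*COUNT, (r+1)*COUNT) of the file. */
enum { COUNT = 1000 };
int        buf[COUNT], rank;
MPI_Status status;
MPI_Offset offset;

MPI_Comm_rank(MPI_COMM_WORLD, &rank);
offset = (MPI_Offset)rank * COUNT * sizeof(int);
MPI_File_read_at(fh, offset, buf, COUNT, MPI_INT, &status);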



MPI_File_read_at_all (C) MPI_FILE_READ_AT_ALL (Fortran)

MPI_File_read_at_all reads data collectively using an explicit offset.

#include <mpi.h>
int MPI_File_read_at_all(MPI_File fh, MPI_Offset offset,
                         void *buf, int count,
                         MPI_Datatype datatype,
                         MPI_Status *status)
use mpi
CALL MPI_FILE_READ_AT_ALL(fh, offset, buf, count, datatype,
                          status, ierr)
<arbitrary> :: buf(*)
INTEGER                       :: fh, count, datatype,
                                 status(MPI_STATUS_SIZE), ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh IN File handle (handle).
offset IN File offset (integer).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_read_at_all_begin (C) MPI_FILE_READ_AT_ALL_BEGIN (Fortran)

MPI_File_read_at_all_begin begins a nonblocking collective read of all processes associated with a file handle. The read begins at an explicit offset.

#include <mpi.h>
int MPI_File_read_at_all_begin(MPI_File fh, MPI_Offset offset,
                               void *buf, int count,
                               MPI_Datatype datatype)
use mpi
CALL MPI_FILE_READ_AT_ALL_BEGIN(fh, offset, buf, count, datatype,
                                ierr)
<arbitrary>                   :: buf(*)
INTEGER                       :: fh, count, datatype, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh IN File handle (handle).
offset IN File offset (integer).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
ierr OUT Return code (Fortran only).



MPI_File_read_at_all_end (C) MPI_FILE_READ_AT_ALL_END (Fortran)

MPI_File_read_at_all_end completes a nonblocking collective read started by MPI_File_read_at_all_begin.

#include <mpi.h>
int MPI_File_read_at_all_end(MPI_File fh, void *buf,
                             MPI_Status *status)
use mpi
CALL MPI_FILE_READ_AT_ALL_END(fh, buf, status, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, status(MPI_STATUS_SIZE), ierr
fh IN File handle (handle).
buf OUT Initial address of buffer (choice).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_read_ordered (C) MPI_FILE_READ_ORDERED (Fortran)

MPI_File_read_ordered reads data collectively in rank order using the shared file pointer.

#include <mpi.h>
int MPI_File_read_ordered(MPI_File fh, void *buf, int count,
                          MPI_Datatype datatype, MPI_Status *status)
use mpi
CALL MPI_FILE_READ_ORDERED(fh, buf, count, datatype, status,
                           ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_read_ordered_begin (C) MPI_FILE_READ_ORDERED_BEGIN (Fortran)

MPI_File_read_ordered_begin begins a nonblocking collective ordered read of all processes associated with a file handle.

#include <mpi.h>
int MPI_File_read_ordered_begin(MPI_File fh, void *buf,
                                int count, MPI_Datatype datatype)
use mpi
CALL MPI_FILE_READ_ORDERED_BEGIN(fh, buf, count, datatype, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, ierr
fh INOUT File handle (handle).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
ierr OUT Return code (Fortran only).



MPI_File_read_ordered_end (C) MPI_FILE_READ_ORDERED_END (Fortran)

MPI_File_read_ordered_end completes a nonblocking collective ordered read started by MPI_File_read_ordered_begin.

#include <mpi.h>
int MPI_File_read_ordered_end(MPI_File fh, void *buf,
                              MPI_Status *status)
use mpi
CALL MPI_FILE_READ_ORDERED_END(fh, buf, status, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf OUT Initial address of buffer (choice).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_read_shared (C) MPI_FILE_READ_SHARED (Fortran)

MPI_File_read_shared reads a file using the shared file pointer.

#include <mpi.h>
int MPI_File_read_shared(MPI_File fh, void *buf, int count,
                         MPI_Datatype datatype, MPI_Status *status)
use mpi
CALL MPI_FILE_READ_SHARED(fh, buf, count, datatype, status,
                          ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_seek (C) MPI_FILE_SEEK (Fortran)

MPI_File_seek updates the individual file pointer.

#include <mpi.h>
int MPI_File_seek(MPI_File fh, MPI_Offset offset, int whence)
use mpi
CALL MPI_FILE_SEEK(fh, offset, whence, ierr)
INTEGER                       :: fh, whence, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh INOUT File handle (handle).
offset IN File offset (integer).
whence IN Update mode (state).
ierr OUT Return code (Fortran only).



MPI_File_seek_shared (C) MPI_FILE_SEEK_SHARED (Fortran)

MPI_File_seek_shared updates the shared file pointer.

#include <mpi.h>
int MPI_File_seek_shared(MPI_File fh, MPI_Offset offset,
                         int whence)
use mpi
CALL MPI_FILE_SEEK_SHARED(fh, offset, whence, ierr)
INTEGER                       :: fh, whence, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh INOUT File handle (handle).
offset IN File offset (integer).
whence IN Update mode (state).
ierr OUT Return code (Fortran only).



MPI_File_set_atomicity (C) MPI_FILE_SET_ATOMICITY (Fortran)

MPI_File_set_atomicity sets the consistency semantics.

#include <mpi.h>
int MPI_File_set_atomicity(MPI_File fh, int flag)
use mpi
CALL MPI_FILE_SET_ATOMICITY(fh, flag, ierr)
INTEGER     :: fh, ierr
LOGICAL     :: flag
fh INOUT File handle (handle).
flag IN True if atomic mode is set, false if nonatomic mode is set (logical).
ierr OUT Return code (Fortran only).



MPI_File_set_info (C) MPI_FILE_SET_INFO (Fortran)

MPI_File_set_info sets new values for the hints of a file.

#include <mpi.h>
int MPI_File_set_info(MPI_File fh, MPI_Info info)
use mpi
CALL MPI_FILE_SET_INFO(fh, info, ierr)
INTEGER     :: fh, info, ierr
fh INOUT File handle (handle).
info IN Info object (handle).
ierr OUT Return code (Fortran only).



MPI_File_set_size (C) MPI_FILE_SET_SIZE (Fortran)

MPI_File_set_size resizes a file.

#include <mpi.h>
int MPI_File_set_size(MPI_File fh, MPI_Offset size)
use mpi
CALL MPI_FILE_SET_SIZE(fh, size, ierr)
INTEGER                       :: fh, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: size
fh INOUT File handle (handle).
size IN Size to truncate or expand file (integer).
ierr OUT Return code (Fortran only).



MPI_File_set_view (C) MPI_FILE_SET_VIEW (Fortran)

MPI_File_set_view changes a process's view.

#include <mpi.h>
int MPI_File_set_view(MPI_File fh, MPI_Offset disp,
                      MPI_Datatype etype, MPI_Datatype filetype,
                      char *datarep, MPI_Info info)
use mpi
CALL MPI_FILE_SET_VIEW(fh, disp, etype, filetype, datarep,
                       info, ierr)
CHARACTER*(*)                 :: datarep
INTEGER                       :: fh, etype, filetype, info, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: disp     
fh INOUT File handle (handle).
disp IN Displacement (integer).
etype IN Elementary datatype (handle).
filetype IN Filetype (handle).
datarep IN Data representation (string).
info IN Info object (handle).
ierr OUT Return code (Fortran only).
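
For example, giving each rank a disjoint contiguous region of the file and then writing collectively; a sketch in which rank, nlocal, buf, and an open file handle fh are assumed:

/* Sketch: each rank views the file starting at its own byte
   displacement, then writes nlocal ints collectively.        */
MPI_Offset disp = (MPI_Offset)rank * nlocal * sizeof(int);
MPI_File_set_view(fh, disp, MPI_INT, MPI_INT, "native", MPI_INFO_NULL);
MPI_File_write_all(fh, buf, nlocal, MPI_INT, MPI_STATUS_IGNORE);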



MPI_File_sync (C) MPI_FILE_SYNC (Fortran)

MPI_File_sync transfers data to a storage device.

#include <mpi.h>
int MPI_File_sync(MPI_File fh)
use mpi
CALL MPI_FILE_SYNC(fh, ierr)
INTEGER     :: fh, ierr
fh INOUT File handle (handle).
ierr OUT Return code (Fortran only).



MPI_File_write (C) MPI_FILE_WRITE (Fortran)

MPI_File_write writes a file using the individual file pointer.

#include <mpi.h>
int MPI_File_write(MPI_File fh, void *buf, int count,
                   MPI_Datatype datatype, MPI_Status *status)
use mpi
CALL MPI_FILE_WRITE(fh, buf, count, datatype, status, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_write_all (C) MPI_FILE_WRITE_ALL (Fortran)

MPI_File_write_all writes data collectively using the individual file pointer.

#include <mpi.h>
int MPI_File_write_all(MPI_File fh, void *buf, int count,
                       MPI_Datatype datatype, MPI_Status *status)
use mpi
CALL MPI_FILE_WRITE_ALL(fh, buf, count, datatype, status, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_write_all_begin (C) MPI_FILE_WRITE_ALL_BEGIN (Fortran)

MPI_File_write_all_begin begins a nonblocking collective write of all processes associated with a file handle.

#include <mpi.h>
int MPI_File_write_all_begin(MPI_File fh, void *buf, int count,
                             MPI_Datatype datatype)
use mpi
CALL MPI_FILE_WRITE_ALL_BEGIN(fh, buf, count, datatype, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, ierr
fh INOUT File handle (handle).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
ierr OUT Return code (Fortran only).



MPI_File_write_all_end (C) MPI_FILE_WRITE_ALL_END (Fortran)

MPI_File_write_all_end completes a nonblocking collective write started by MPI_File_write_all_begin.

#include <mpi.h>
int MPI_File_write_all_end(MPI_File fh, void *buf,
                           MPI_Status *status)
use mpi
CALL MPI_FILE_WRITE_ALL_END(fh, buf, status, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf IN Initial address of buffer (choice).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_write_at (C) MPI_FILE_WRITE_AT (Fortran)

MPI_File_write_at writes a file using an explicit offset.

#include <mpi.h>
int MPI_File_write_at(MPI_File fh, MPI_Offset offset, void *buf,
                      int count, MPI_Datatype datatype,
                      MPI_Status *status)
use mpi
CALL MPI_FILE_WRITE_AT(fh, offset, buf, count, datatype, status,
                       ierr)
<arbitrary> :: buf(*)
INTEGER                       :: fh, count, datatype,
                                 status(MPI_STATUS_SIZE), ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh INOUT File handle (handle).
offset IN File offset (integer).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_write_at_all (C) MPI_FILE_WRITE_AT_ALL (Fortran)

MPI_File_write_at_all writes data collectively using an explicit offset.

#include <mpi.h>
int MPI_File_write_at_all(MPI_File fh, MPI_Offset offset,
                          void *buf, int count,
                          MPI_Datatype datatype,
                          MPI_Status *status)
use mpi
CALL MPI_FILE_WRITE_AT_ALL(fh, offset, buf, count, datatype,
                           status, ierr)
<arbitrary>                   :: buf(*)
INTEGER                       :: fh, count, datatype,
                                 status(MPI_STATUS_SIZE), ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh INOUT File handle (handle).
offset IN File offset (integer).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_write_at_all_begin (C) MPI_FILE_WRITE_AT_ALL_BEGIN (Fortran)

MPI_File_write_at_all_begin begins a nonblocking collective write of all processes associated with a file handle. The write begins at an explicit offset.

#include <mpi.h>
int MPI_File_write_at_all_begin(MPI_File fh, MPI_Offset offset,
                                void *buf, int count,
                                MPI_Datatype datatype)
use mpi
CALL MPI_FILE_WRITE_AT_ALL_BEGIN(fh, offset, buf, count,
                                 datatype, ierr)
<arbitrary>                   :: buf(*)
INTEGER                       :: fh, count, datatype, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh INOUT File handle (handle).
offset IN File offset (integer).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
ierr OUT Return code (Fortran only).



MPI_File_write_at_all_end (C) MPI_FILE_WRITE_AT_ALL_END (Fortran)

MPI_File_write_at_all_end completes a nonblocking collective write started by MPI_File_write_at_all_begin.

#include <mpi.h>
int MPI_File_write_at_all_end(MPI_File fh, void *buf,
                              MPI_Status *status)
use mpi
CALL MPI_FILE_WRITE_AT_ALL_END(fh, buf, status, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf IN Initial address of buffer (choice).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_write_ordered (C) MPI_FILE_WRITE_ORDERED (Fortran)

MPI_File_write_ordered writes data collectively in rank order using the shared file pointer.

#include <mpi.h>
int MPI_File_write_ordered(MPI_File fh, void *buf, int count,
                           MPI_Datatype datatype, MPI_Status *status)
use mpi
CALL MPI_FILE_WRITE_ORDERED(fh, buf, count, datatype, status,
                            ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).
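
Because the data are laid out in rank order, this routine is convenient for writing per-rank records to a common log-style file. A sketch, assuming rank, an open file handle fh, and snprintf from <stdio.h>:

/* Sketch: each rank appends one text line; the lines appear in
   the file ordered by rank.                                    */
char line[64];
int  len = snprintf(line, sizeof line, "rank %d finished\n", rank);
MPI_File_write_ordered(fh, line, len, MPI_CHAR, MPI_STATUS_IGNORE);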



MPI_File_write_ordered_begin (C) MPI_FILE_WRITE_ORDERED_BEGIN (Fortran)

MPI_File_write_ordered_begin begins a nonblocking collective ordered write of all processes associated with a file handle.

#include <mpi.h>
int MPI_File_write_ordered_begin(MPI_File fh, void *buf,
                                 int count, MPI_Datatype datatype)
use mpi
CALL MPI_FILE_WRITE_ORDERED_BEGIN(fh, buf, count, datatype, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, ierr
fh INOUT File handle (handle).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
ierr OUT Return code (Fortran only).



MPI_File_write_ordered_end (C) MPI_FILE_WRITE_ORDERED_END (Fortran)

MPI_File_write_ordered_end completes a nonblocking collective ordered write started by MPI_File_write_ordered_begin.

#include <mpi.h>
int MPI_File_write_ordered_end(MPI_File fh, void *buf,
                               MPI_Status *status)
use mpi
CALL MPI_FILE_WRITE_ORDERED_END(fh, buf, status, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf IN Initial address of buffer (choice).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_File_write_shared (C) MPI_FILE_WRITE_SHARED (Fortran)

MPI_File_write_shared writes a file using the shared file pointer.

#include <mpi.h>
int MPI_File_write_shared(MPI_File fh, void *buf, int count,
                          MPI_Datatype datatype, MPI_Status *status)
use mpi
CALL MPI_FILE_WRITE_SHARED(fh, buf, count, datatype, status,
                           ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, status(MPI_STATUS_SIZE), ierr
fh INOUT File handle (handle).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
status OUT Status object (status).
ierr OUT Return code (Fortran only).



MPI_Register_datarep (C) MPI_REGISTER_DATAREP (Fortran)

MPI_Register_datarep registers data conversion functions.

#include <mpi.h>
int MPI_Register_datarep(char *datarep,
                         MPI_Datarep_conversion_function *read_conversion_fn,
                         MPI_Datarep_conversion_function *write_conversion_fn,
                         MPI_Datarep_extent_function *dtype_file_extent_fn,
                         void *extra_state)
use mpi
CALL MPI_REGISTER_DATAREP(datarep, read_conversion_fn,
                          write_conversion_fn, dtype_file_extent_fn,
                          extra_state, ierr)
CHARACTER*(*)                  :: datarep
EXTERNAL                       :: read_conversion_fn, write_conversion_fn,
                                  dtype_file_extent_fn
INTEGER(KIND=MPI_ADDRESS_KIND) :: extra_state
INTEGER                        :: ierr
datarep IN Data representation identifier (string).
read_conversion_fn IN Function invoked to convert from file representation to native representation (function).
write_conversion_fn IN Function invoked to convert from native representation to file representation (function).
dtype_file_extent_fn IN Function invoked to return the extent of a datatype as represented in the file (function).
extra_state IN Extra state.
ierr OUT Return code (Fortran only).



MPI_File_iread_at_all (C) MPI_FILE_IREAD_AT_ALL (Fortran)

MPI_File_iread_at_all is a nonblocking version of MPI_File_read_at_all.

#include <mpi.h>
int MPI_File_iread_at_all(MPI_File fh, MPI_Offset offset,
                         void *buf, int count,
                         MPI_Datatype datatype,
                         MPI_Request *request)
use mpi
CALL MPI_FILE_IREAD_AT_ALL(fh, offset, buf, count, datatype,
                          request, ierr)
<arbitrary> :: buf(*)
INTEGER                       :: fh, count, datatype,
                                 request, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh IN File handle (handle).
offset IN File offset (integer).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
request OUT Request object (handle).
ierr OUT Return code (Fortran only).



MPI_File_iwrite_at_all (C) MPI_FILE_IWRITE_AT_ALL (Fortran)

MPI_File_iwrite_at_all is a nonblocking version of MPI_File_write_at_all.

#include <mpi.h>
int MPI_File_iwrite_at_all(MPI_File fh, MPI_Offset offset,
                          void *buf, int count,
                          MPI_Datatype datatype,
                          MPI_Request *request)
use mpi
CALL MPI_FILE_IWRITE_AT_ALL(fh, offset, buf, count, datatype,
                           request, ierr)
<arbitrary>                   :: buf(*)
INTEGER                       :: fh, count, datatype,
                                 request, ierr
INTEGER(KIND=MPI_OFFSET_KIND) :: offset
fh INOUT File handle (handle).
offset IN File offset (integer).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
request OUT Request object (handle).
ierr OUT Return code (Fortran only).



MPI_File_iread_all (C) MPI_FILE_IREAD_ALL (Fortran)

MPI_File_iread_all is a nonblocking version of MPI_File_read_all.

#include <mpi.h>
int MPI_File_iread_all(MPI_File fh, void *buf, int count,
                      MPI_Datatype datatype, MPI_Request *request)
use mpi
CALL MPI_FILE_IREAD_ALL(fh, buf, count, datatype, request, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, request, ierr
fh INOUT File handle (handle).
buf OUT Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
request OUT Request object (handle).
ierr OUT Return code (Fortran only).



MPI_File_iwrite_all (C) MPI_FILE_IWRITE_ALL (Fortran)

MPI_File_iwrite_all is a nonblocking version of MPI_File_write_all.

#include <mpi.h>
int MPI_File_iwrite_all(MPI_File fh, void *buf, int count,
                       MPI_Datatype datatype, MPI_Request *request)
use mpi
CALL MPI_FILE_IWRITE_ALL(fh, buf, count, datatype, request, ierr)
<arbitrary> :: buf(*)
INTEGER     :: fh, count, datatype, request, ierr
fh INOUT File handle (handle).
buf IN Initial address of buffer (choice).
count IN Number of elements in buffer (integer).
datatype IN Datatype of each buffer element (handle).
request OUT Request object (handle).
ierr OUT Return code (Fortran only).


4.12   Language Bindings

MPI_SIZEOF Returns the size in bytes of the machine representation of a variable of numeric intrinsic Fortran type
MPI_Type_create_f90_complex/MPI_TYPE_CREATE_F90_COMPLEX Returns a predefined MPI datatype that matches a COMPLEX variable of KIND selected_real_kind(p, r)
MPI_Type_create_f90_integer/MPI_TYPE_CREATE_F90_INTEGER Returns a predefined MPI datatype that matches an INTEGER variable of KIND selected_int_kind(r)
MPI_Type_create_f90_real/MPI_TYPE_CREATE_F90_REAL Returns a predefined MPI datatype that matches a REAL variable of KIND selected_real_kind(p, r)
MPI_Type_match_size/MPI_TYPE_MATCH_SIZE Returns an MPI datatype matching a local variable of type (typeclass, size)



MPI_SIZEOF (Fortran)

MPI_SIZEOF returns the size in bytes of the machine representation of a variable of a numeric intrinsic Fortran type.

use mpi
CALL MPI_SIZEOF(x, size, ierr)
<arbitrary> :: x
INTEGER     :: size, ierr
x IN Numeric intrinsic Fortran type (choice).
size OUT Size of machine representation of x in bytes (integer).
ierr OUT Return code (Fortran only).



MPI_Type_create_f90_complex (C) MPI_TYPE_CREATE_F90_COMPLEX (Fortran)

MPI_Type_create_f90_complex returns a predefined MPI datatype that matches a COMPLEX variable of KIND selected_real_kind(p, r).

#include <mpi.h>
int MPI_Type_create_f90_complex(int p, int r,
                                MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_CREATE_F90_COMPLEX(p, r, newtype, ierr)
INTEGER     :: p, r, newtype, ierr
p IN Precision in decimal digits (integer).
r IN Decimal exponent range (integer).
newtype OUT Requested MPI datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_create_f90_integer (C) MPI_TYPE_CREATE_F90_INTEGER (Fortran)

MPI_Type_create_f90_integer returns a predefined MPI datatype that matches an INTEGER variable of KIND selected_int_kind(r).

#include <mpi.h>
int MPI_Type_create_f90_integer(int r, MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_CREATE_F90_INTEGER(r, newtype, ierr)
INTEGER     :: r, newtype, ierr
r IN Decimal exponent range (integer).
newtype OUT Requested MPI datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_create_f90_real (C) MPI_TYPE_CREATE_F90_REAL (Fortran)

MPI_Type_create_f90_real returns a predefined MPI datatype that matches a REAL variable of KIND selected_real_kind(p, r).

#include <mpi.h>
int MPI_Type_create_f90_real(int p, int r, MPI_Datatype *newtype)
use mpi
CALL MPI_TYPE_CREATE_F90_REAL(p, r, newtype, ierr)
INTEGER     :: p, r, newtype, ierr
p IN Precision in decimal digits (integer).
r IN Decimal exponent range (integer).
newtype OUT Requested MPI datatype (handle).
ierr OUT Return code (Fortran only).



MPI_Type_match_size (C) MPI_TYPE_MATCH_SIZE (Fortran)

MPI_Type_match_size returns an MPI datatype matching a local variable of type (typeclass, size).

#include <mpi.h>
int MPI_Type_match_size(int typeclass, int size,
                        MPI_Datatype *type)
use mpi
CALL MPI_TYPE_MATCH_SIZE(typeclass, size, type, ierr)
INTEGER     :: typeclass, size, type, ierr
typeclass IN Generic type specifier (integer).
size IN Size of representation in bytes (integer).
type OUT Datatype with correct type, size (handle).
ierr OUT Return code (Fortran only).
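
For instance, to obtain the MPI datatype matching a C double via its typeclass and size:

MPI_Datatype t;
MPI_Type_match_size(MPI_TYPECLASS_REAL, (int)sizeof(double), &t);
/* t can now be used wherever a datatype handle is expected; it is
   a reference to a predefined type and must not be freed.         */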


4.13   Profiling Interface

MPI_Pcontrol/MPI_PCONTROL Controls profiling



MPI_Pcontrol (C) MPI_PCONTROL (Fortran)

MPI_Pcontrol controls profiling.

#include <mpi.h>
int MPI_Pcontrol(int level, ...)
use mpi
CALL MPI_PCONTROL(level)
INTEGER     :: level
level IN Profiling level (integer).
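
By convention in the MPI standard, level 0 disables profiling and level 1 enables it with default verbosity; the exact meaning of each level is defined by the profiling tool. A typical use:

MPI_Pcontrol(0);   /* profiling disabled for this region */
/* ... code that should not be profiled ... */
MPI_Pcontrol(1);   /* profiling re-enabled               */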


4.14   Deprecated Procedures

MPI_Attr_delete/MPI_ATTR_DELETE Deletes an attribute from a communicator
MPI_Attr_get/MPI_ATTR_GET Gets a communicator attribute
MPI_Attr_put/MPI_ATTR_PUT Assigns an attribute for a communicator
MPI_Keyval_create/MPI_KEYVAL_CREATE Generates an attribute key
MPI_Keyval_free/MPI_KEYVAL_FREE Frees an attribute key



MPI_Attr_delete (C) MPI_ATTR_DELETE (Fortran)

MPI_Attr_delete deletes an attribute value associated with a key.

#include <mpi.h>
int MPI_Attr_delete(MPI_Comm comm, int keyval)
use mpi
CALL MPI_ATTR_DELETE(comm, keyval, ierr)
INTEGER     :: comm, keyval, ierr
comm IN Communicator to which attribute is attached (handle).
keyval IN Key value of deleted attribute (integer).
ierr OUT Return code (Fortran only).



MPI_Attr_get (C) MPI_ATTR_GET (Fortran)

MPI_Attr_get returns an attribute value associated with a key.

#include <mpi.h>
int MPI_Attr_get(MPI_Comm comm, int keyval, void *attr_value,
                 int *flag)
use mpi
CALL MPI_ATTR_GET(comm, keyval, attr_value, flag, ierr)
LOGICAL     :: flag
INTEGER     :: comm, keyval, attr_value, ierr
comm IN Communicator to which attribute is attached (handle).
keyval IN Key value (integer).
attr_value OUT Attribute value, unless flag is false.
flag OUT True if an attribute value is extracted; false if no attribute is associated with the key.
ierr OUT Return code (Fortran only).



MPI_Attr_put (C) MPI_ATTR_PUT (Fortran)

MPI_Attr_put stores an attribute value associated with a key.

#include <mpi.h>
int MPI_Attr_put(MPI_Comm comm, int keyval, void *attr_value)
use mpi
CALL MPI_ATTR_PUT(comm, keyval, attr_value, ierr)
INTEGER     :: comm, keyval, attr_value, ierr
comm IN Communicator to which attribute is attached (handle).
keyval IN Key value, as returned by MPI_KEYVAL_CREATE (integer).
attr_value IN Attribute value.
ierr OUT Return code (Fortran only).



MPI_Keyval_create (C) MPI_KEYVAL_CREATE (Fortran)

MPI_Keyval_create generates an attribute key.

#include <mpi.h>
int MPI_Keyval_create(MPI_Copy_function *copy_fn, MPI_Delete_function *delete_fn, 
                      int *keyval, void *extra_state)  

typedef int MPI_Copy_function(MPI_Comm oldcomm, int keyval, void *extra_state,
                              void *attribute_val_in, void *attribute_val_out, 
                              int *flag)
use mpi
CALL MPI_KEYVAL_CREATE(copy_fn, delete_fn, keyval, extra_state, ierr)
EXTERNAL    :: copy_fn, delete_fn
INTEGER     :: keyval, extra_state, ierr

SUBROUTINE copy_function(oldcomm, keyval, extra_state, attribute_val_in,
                         attribute_val_out, flag, ierr)
INTEGER     :: oldcomm, keyval, extra_state, attribute_val_in,
               attribute_val_out, ierr
LOGICAL     :: flag
copy_fn IN Copy function called when generating a duplicate of a communicator.
delete_fn IN Delete function invoked when the communicator is freed or the attribute is deleted.
keyval OUT Attribute key.
extra_state IN Auxiliary argument for copy_fn and delete_fn.
ierr OUT Return code (Fortran only).
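
A sketch of the deprecated attribute-caching calls used together, with the predefined no-op callbacks MPI_NULL_COPY_FN and MPI_NULL_DELETE_FN; new code should use MPI_Comm_create_keyval and the related MPI-2 routines instead:

/* Sketch: create a key, attach an attribute to MPI_COMM_WORLD,
   read it back, and free the key again.                        */
static int value = 42;
int  keyval, flag, *got;

MPI_Keyval_create(MPI_NULL_COPY_FN, MPI_NULL_DELETE_FN, &keyval, NULL);
MPI_Attr_put(MPI_COMM_WORLD, keyval, &value);
MPI_Attr_get(MPI_COMM_WORLD, keyval, &got, &flag);
/* flag is true and *got == 42 here */
MPI_Keyval_free(&keyval);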



MPI_Keyval_free (C) MPI_KEYVAL_FREE (Fortran)

MPI_Keyval_free frees an attribute key.

#include <mpi.h>
int MPI_Keyval_free(int *keyval)
use mpi
CALL MPI_KEYVAL_FREE(keyval, ierr)
INTEGER     :: keyval, ierr
keyval INOUT Attribute key being freed.
ierr OUT Return code (Fortran only).

