MPI_Win_allocate_shared(3)	      MPI	    MPI_Win_allocate_shared(3)

NAME
       MPI_Win_allocate_shared - Create an MPI Window object for one-sided
       communication and shared memory access, and allocate memory at each
       process

SYNOPSIS
       int MPI_Win_allocate_shared(MPI_Aint size, int disp_unit, MPI_Info info,
       MPI_Comm comm, void *baseptr, MPI_Win *win)

       int MPI_Win_allocate_shared_c(MPI_Aint size, MPI_Aint disp_unit, MPI_Info info,
       MPI_Comm comm, void *baseptr, MPI_Win *win)

INPUT PARAMETERS
       size   - size of local window in bytes (non-negative integer)
       disp_unit
              - local unit size for displacements, in bytes (positive integer)
       info   - info argument (handle)
       comm   - intra-communicator (handle)

OUTPUT PARAMETERS
       baseptr
              - address of local allocated window segment (choice)
       win    - window object returned by the call (handle)

NOTES
       This is a collective call executed by all processes in the group of
       comm.  On each process i, it allocates memory of at least size bytes
       that is shared among all processes in comm, and returns a pointer to
       the locally allocated segment in baseptr that can be used for
       load/store accesses on the calling process.  The locally allocated
       memory can be the target of load/store accesses by remote processes;
       the base pointers for other processes can be queried using the
       function MPI_Win_shared_query .

       The call also returns a window object that can be used by all
       processes in comm to perform RMA operations.  The size argument may be
       different at each process and size = 0 is valid.  It is the user's
       responsibility to ensure that the communicator comm represents a group
       of processes that can create a shared memory segment that can be
       accessed by all processes in the group.  The allocated memory is
       contiguous across process ranks unless the info key
       alloc_shared_noncontig is specified.  Contiguous across process ranks
       means that the first address in the memory segment of process i is
       consecutive with the last address in the memory segment of process
       i - 1.  This may enable the user to calculate remote address offsets
       with local information only.
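
       The following is a minimal sketch of one possible use, assuming the C
       bindings shown above and a node-local communicator obtained with
       MPI_Comm_split_type and MPI_COMM_TYPE_SHARED so that all participating
       processes can in fact share memory:

          #include <mpi.h>
          #include <stdio.h>

          int main(int argc, char *argv[])
          {
              MPI_Comm nodecomm;
              MPI_Win  win;
              double  *baseptr, *base0;
              MPI_Aint qsize;
              int      rank, nprocs, qdisp;

              MPI_Init(&argc, &argv);

              /* Group together processes that can share memory, as required
                 for MPI_Win_allocate_shared. */
              MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                                  MPI_INFO_NULL, &nodecomm);
              MPI_Comm_rank(nodecomm, &rank);
              MPI_Comm_size(nodecomm, &nprocs);

              /* Collectively allocate one double per process in a shared
                 segment; baseptr points at the local portion. */
              MPI_Win_allocate_shared(sizeof(double), sizeof(double),
                                      MPI_INFO_NULL, nodecomm,
                                      &baseptr, &win);

              /* Query rank 0's base pointer; with the default contiguous
                 layout the segments of ranks 0..nprocs-1 follow one another
                 in memory. */
              MPI_Win_shared_query(win, 0, &qsize, &qdisp, &base0);

              /* Each process stores into its own slot by direct load/store;
                 the barrier orders these stores before rank 0 reads them. */
              MPI_Win_lock_all(0, win);
              base0[rank] = (double)rank;
              MPI_Win_sync(win);
              MPI_Win_unlock_all(win);
              MPI_Barrier(nodecomm);

              if (rank == 0)
                  for (int i = 0; i < nprocs; i++)
                      printf("element %d = %.1f\n", i, base0[i]);

              MPI_Win_free(&win);
              MPI_Comm_free(&nodecomm);
              MPI_Finalize();
              return 0;
          }

       If the info key alloc_shared_noncontig is set, the segments need not
       be contiguous and this address arithmetic is no longer valid; each
       rank's base pointer must then be obtained with MPI_Win_shared_query .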

THREAD AND INTERRUPT SAFETY
       This routine is thread-safe.  This means that this routine may be
       safely used by multiple threads without the need for any user-provided
       thread locks.  However, the routine is not interrupt safe.  Typically,
       this is due to the use of memory allocation routines such as malloc or
       other non-MPICH runtime routines that are themselves not interrupt-
       safe.

NOTES FOR FORTRAN
       All MPI routines in Fortran (except for MPI_WTIME and MPI_WTICK ) have
       an additional argument ierr at the end of the argument list.  ierr is
       an integer and has the same meaning as the return value of the routine
       in C.  In Fortran, MPI routines are subroutines, and are invoked with
       the call statement.

       All MPI objects (e.g., MPI_Datatype , MPI_Comm ) are of type INTEGER
       in Fortran.

ERRORS
       All MPI routines (except MPI_Wtime and MPI_Wtick ) return an error
       value; C routines as the value of the function and Fortran routines in
       the last argument.  Before the value is returned, the current MPI
       error handler is called.  By default, this error handler aborts the
       MPI job.  The error handler may be changed with
       MPI_Comm_set_errhandler (for communicators), MPI_File_set_errhandler
       (for files), and MPI_Win_set_errhandler (for RMA windows).  The MPI-1
       routine MPI_Errhandler_set may be used but its use is deprecated.  The
       predefined error handler MPI_ERRORS_RETURN may be used to cause error
       values to be returned.  Note that MPI does not guarantee that an MPI
       program can continue past an error; however, MPI implementations will
       attempt to continue whenever possible.
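
       A minimal sketch of checking for errors instead of aborting, reusing
       the names from the example above: errors raised by window creation are
       typically handled by the error handler attached to comm (the window
       does not exist yet), so MPI_ERRORS_RETURN is attached to the
       communicator and any failure is decoded with MPI_Error_string .

          MPI_Comm_set_errhandler(nodecomm, MPI_ERRORS_RETURN);
          int err = MPI_Win_allocate_shared(sizeof(double), sizeof(double),
                                            MPI_INFO_NULL, nodecomm,
                                            &baseptr, &win);
          if (err != MPI_SUCCESS) {
              char msg[MPI_MAX_ERROR_STRING];
              int  len;
              /* Translate the error code into a human-readable message. */
              MPI_Error_string(err, msg, &len);
              fprintf(stderr, "MPI_Win_allocate_shared: %s\n", msg);
          }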

       MPI_SUCCESS
              - No error; MPI routine completed successfully.
       MPI_ERR_ARG
              - Invalid argument.  Some argument is invalid and is not iden-
              tified by a specific error class (e.g., MPI_ERR_RANK ).
       MPI_ERR_COMM
              - Invalid communicator.  A common error is to use a null commu-
              nicator in a call (not even allowed in MPI_Comm_rank ).
       MPI_ERR_DISP
              - Invalid disp_unit argument; the displacement unit must be a
              positive integer.
       MPI_ERR_INFO
              - Invalid info argument (handle).
       MPI_ERR_SIZE
              - Invalid size argument; the local window size must be non-
              negative.
       MPI_ERR_OTHER
              - Other error; use MPI_Error_string to get more information
              about this error code.

SEE ALSO
       MPI_Win_allocate  MPI_Win_create  MPI_Win_create_dynamic  MPI_Win_free
       MPI_Win_shared_query

				   2/3/2025	    MPI_Win_allocate_shared(3)
