EMILUA(7)		       Emilua reference			     EMILUA(7)

NAME
       Emilua -	Lua execution engine

RATIONALE
       Perhaps Lua's best-known	feature	is its portability. Its	reference
       implementation from PUC-Rio is written in plain ANSI C and it's very
       easy to embed in	any larger program.

       However, limiting Lua to ANSI C comes at a high cost. Any useful
       program interacts with the external world (i.e. it must perform IO
       operations), and approaching portability by restricting oneself to
       ANSI C has consequences:

       •  Many useful IO operations don't belong to ANSI C's scope (you
          can't even perform socket operations).

       •  Not every operation will use the most efficient approach for the
          underlying system.

       •  There aren't even APIs to create threads, nor to multiplex IO
          requests in the same thread, so at most you can handle
          half-duplex protocols.

       Another approach to portability -- the one chosen by Emilua -- is to
       have a different implementation for every OS, so your Lua program
       can use portable interfaces whose underlying implementations differ
       from system to system. That also seems to be the approach taken by
       luapower[1].

       Furthermore, if efficient operations exist to deal with patterns
       specific to some OSes, they are available when your Lua program runs
       on those systems (as long as they don't conflict with the proactor
       model[2]). For instance, you can make use of TransmitFile() when
       your program runs on Windows. It's expected that more of these
       interfaces will appear in future Emilua releases.

HELLO WORLD
	   print("Hello	World")

       Or, using the streams API:

	   local system	= require "system"
	   local stream	= require "stream"

	   stream.write_all(system.out,	"Hello World\n")

       Emilua doesn't expose native handles (e.g. file descriptors, or Windows
       HANDLE objects) for the underlying system directly. Instead they're
       wrapped into IO objects that expose a portable &	safe interface (they'd
       also be type-safe in statically typed languages). You can't accept
       connections on a	pipe handle, and Emilua	doesn't	worry about such
       impossible use cases.

	   Note

	   Many	of the interfaces used in Emilua are inspired by Douglas C.
	   Schmidt's work in Pattern-Oriented Software Architecture.

       The standard stream handles -- stdin, stdout, and stderr	-- are
       available in the	module "system". They model the	interface for streams.
       The module "stream" contains useful functions to	manipulate these
       objects.

	   Tip

	   Many	other types modeling streams exist in Emilua such as files,
	   pipes, serial ports,	TCP and	TLS connections.

       A stream	can be further broken down into	read streams and write
       streams.	system.out models a write stream. Write	streams	contain	the
       following method:

       write_some(self,	buffer:	byte_span) -> integer
	   Writes buffer into the stream and returns the number	of bytes
	   written.

	   On errors, an exception containing the error	code generated by the
	   OS is raised.

       Writes are not atomic (unless guaranteed by the underlying system in
       certain scenarios). To portably write the whole buffer into the
       stream, we must keep calling write_some() until the buffer is fully
       drained (Emilua won't automatically and inappropriately buffer data
       behind your back). That's what stream.write_all() does. Another
       piece of boilerplate taken care of by stream.write_all() is creating
       a network buffer out of a string object.

ASYNC IO
       In truly	async IO APIs, the network buffer must stay alive until	the
       operation completes. So -- for network buffers -- Emilua	uses a type
       independent of the Lua VM lifetime. If you call system.exit() to	kill
       the calling VM, the network buffers participating in outstanding	IO
       operations will stay alive until	the respective operations finish (but
       killing the VM will also	send a signal to cancel	such associated
       outstanding IO operations).

	   Tip

	   byte_span is	modeled	after Golang slices, but many more algorithms
	   (mostly string-related) are available as well.

       The initiating function (such as read_some()) signals to the
       operating system that it should start an asynchronous operation, but
       the operation itself hardly involves the CPU at all. So if there's
       nothing else to execute, the CPU would idle until notified of
       external events. Keeping the CPU spinning -- or spinning more CPU
       cores -- won't make the IO happen any faster. Once the request is
       sent to the kernel (and then further forwarded to the controller),
       the CPU is free to perform other tasks.

       That's what async IO means: the IO operation happens asynchronously
       to the program's execution. However, signaling that the IO operation
       has completed (the IO completion event) doesn't need to be
       asynchronous.

	     Delay not,	Caesar.	Read it	instantly.
	       Julius Caesar, 3, I -- Shakespeare

	     Here is a letter, read it at your leisure.
	       Merchant	of Venice, 5, I	-- Shakespeare
	    -- Quoted in "VMS Internals	and Data Structures", V4.4,
	    when referring to I/O system services

       There is	a lot more to this topic. However, for the Lua
       programmer, the topic ends here (pretty boring, huh?).

CONCURRENT IO
       The initiating function blocks the current fiber	until the
       operation finishes. However, as we saw earlier, this would be
       the perfect moment to perform other tasks and schedule more IO
       operations.

       A trend we see in modern times is that of lazy frameworks that
       solve the async IO problem first and foremost. Only when their
       authors stumble on the problem of concurrent programming[3] are
       they forced to do something about it, and even then they keep
       ignoring it by offering lame ad-hoc tooling around it[4]. Emilua
       is different. The first versions of Emilua were all focused on
       offering a solid execution engine for concurrent programming.
       Only once this foundation was solid was a new version released
       with plenty of IO operations integrated.

       Emilua -- as the	execution engine -- will schedule fibers and
       actors in a cooperative multitasking fashion. Once the
       initiating function forwards the	request	to the kernel, Emilua
       will choose the next ready task to run and schedule it (be it a
       fiber, be it an actor).

	   Note

	   Emilua is focused on	scalability and	throughput. A solution
	   for latency-oriented	problems could be offered as well, but
	   as of this writing it doesn't exist.

       So, if you want to perform background tasks while the IO
       operation is in progress, just schedule a new task before you
       call the	initiating function.

   Spawning new	fibers
       Just call spawn() passing the start function, and a new fiber
       will be scheduled to run soon.

	   local system	= require "system"
	   local stream	= require "stream"
	   local sleep = require "time".sleep

	   spawn(function()
	       -- WARNING: Please, do not ever use timers to synchronize
	       -- tasks	in your	programs. This is just an example.
	       sleep(1)

	       stream.write_all(system.out, " World\n")
	   end):detach()

	   stream.write_all(system.out,	"Hello")

   Spawning new	actors
       Just call spawn_vm() passing the start module, and a new Lua VM
       will be created and scheduled to run soon.

	   local system	= require "system"
	   local stream	= require "stream"

	   if _CONTEXT == 'main' then
	       spawn_vm('.')
	       stream.write_all(system.out, "Hello")
	   else	assert(_CONTEXT	== 'worker')
	       require "time".sleep(1)
	       stream.write_all(system.out, " World\n")
	   end
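
       Actors can also exchange messages. The following is a minimal
       sketch, assuming spawn_vm() returns a channel with a send() method
       and that the spawned VM receives messages through the "inbox"
       module via receive(); verify these names against the actor
       documentation before relying on them.

           local system = require "system"
           local stream = require "stream"

           if _CONTEXT == 'main' then
               -- assumption: spawn_vm() returns a channel usable to send
               -- messages to the new actor
               local ch = spawn_vm('.')
               ch:send("World")
           else assert(_CONTEXT == 'worker')
               -- assumption: the 'inbox' module exposes this actor's
               -- receiving end
               local inbox = require "inbox"
               local msg = inbox:receive()
               stream.write_all(system.out, "Hello " .. msg .. "\n")
           end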

   Choosing between fibers and actors
       Fibers share memory, and	failing	to handle errors in certain
       well-defined scenarios will bring down the whole	Lua VM.	If you
       need a slightly higher degree of	protection against dirty code,
       spawn actors.

       Lua VMs represent actors	in Emilua. Different actors share no
       memory. That has	an associated cost, and	it's also inconvenient
       for certain common patterns. If you aren't certain which	model
       to choose, go with fibers.

       If you have already saturated your single-core performance, the
       easiest way to extract more performance from the underlying
       system is usually to spawn new threads. Lua isn't a thread-safe
       language, so you need to spawn more Lua VMs (i.e. actors), and a
       few threads as well.

       You can also mix	both approaches.
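
       For instance, nothing stops each actor from running its own set
       of fibers. The minimal sketch below only reuses the primitives
       shown above (spawn(), spawn_vm() and _CONTEXT):

           local system = require "system"
           local stream = require "stream"

           if _CONTEXT == 'main' then
               -- one extra Lua VM (actor); the main VM keeps working too
               spawn_vm('.')
               stream.write_all(system.out, "fiber in the main actor\n")
           else assert(_CONTEXT == 'worker')
               -- inside this actor, fibers still share memory freely
               for i = 1, 3 do
                   spawn(function()
                       stream.write_all(system.out,
                           "worker fiber #" .. i .. "\n")
                   end):detach()
               end
           end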

HELLO SLEEPSORT
       One really useful algorithm to quickly showcase a good deal of an
       execution engine's design is sleepsort. In a nutshell, sleepsort
       sorts numbers by waiting N units of time before printing N, and
       this process is executed concurrently for each item in the list.

	   local sleep = require('time').sleep

	   local numbers = {8, 42, 38, 111, 2, 39, 1}

           for _, n in ipairs(numbers) do
	       spawn(function()
		   sleep(n / 100)
		   print(n)
	       end)
	   end

       The above program will print the	numbers	in sorted order.
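
       spawn() returns a fiber handle, so a variant that waits until
       every number has been printed before moving on only needs
       join(). A minimal sketch:

           local sleep = require('time').sleep

           local numbers = {8, 42, 38, 111, 2, 39, 1}
           local fibers = {}

           for _, n in ipairs(numbers) do
               fibers[#fibers + 1] = spawn(function()
                   sleep(n / 100)
                   print(n)
               end)
           end

           -- block the current fiber until every sleeper has finished
           for _, f in ipairs(fibers) do
               f:join()
           end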

CANCELLABLE OPERATIONS
       IO operations might never complete, so serious execution	engines
       will expose some	way to cancel them. There's a huge tutorial
       just on this topic and you're encouraged	to read	it:
       emilua-cancellation(7).

       Adding a	timeout	argument for each operation is a lame way to
       solve this problem[5], and Emilua wants no part in this trend.
       However,	if that's how you really want to solve your problems,
       here's one way to do it:

	   local sleep = require('time').sleep

	   function op_with_timeout(op,	timeout)
	       local f_op = spawn(op)
	       local f_timer = spawn(function()
		   sleep(timeout)
		   f_op:cancel()
	       end)

	       local ret = {f_op:join()}
	       f_timer:cancel()
	       return unpack(ret)
	   end

	   -- USAGE EXAMPLE

	   local ip = require 'ip'

	   local acceptor = ip.tcp.acceptor.new()
	   acceptor:open('v4')
	   acceptor:set_option('reuse_address',	true)
	   if not pcall(function() acceptor:bind(ip.address.loopback_v4(), 8080) end) then
	       acceptor:bind(ip.address.loopback_v4(), 0)
	   end
	   print('Listening on ' .. ip.tostring(acceptor.local_address,	acceptor.local_port))
	   acceptor:listen()

	   local sock =	op_with_timeout(function() return acceptor:accept() end, 5000)
	   print(getmetatable(sock))

FINAL NOTES
       That's the gist of using Emilua. The interfaces mimic their
       counterparts in the non-async world, and it's usually obvious
       what the program is doing even when there's a huge theoretical
       background behind it all.

       We try to follow the principle of no surprises. One operation in
       Emilua is roughly equivalent to one syscall in the underlying
       OS, and we just pass the original error (if any) unmodified for
       the caller to handle instead of trying to do anything funny
       behind the user's back.

       If you don't need multitasking support, the program you write in
       Emilua won't look much different	from a program written for an
       abstraction layer that just exposes small shims over the	real
       syscalls. If you	can write programs for blocking	APIs, you can
       write programs for Emilua.

       When you do need multitasking, Emilua is perhaps the most
       flexible solution for Lua programs. However, why that is so --
       how to make good use of all the tools, and what is really being
       offered beyond the trivial -- is a topic for other tutorials.

       Many of the topics barely scratched above could be further
       expanded	into tutorials of their	own. Browse the	documentation
       pages to	see what topics	catch your attention.

NOTES
       [1]    https://luapower.com/

       [2]    The  exception  to  this	rule are filesystem operations.
	      Filesystem operations are	available in Emilua  regardless
	      of whether the underlying	system offers them as part of a
	      proactor.

       [3]    Managing	state,	event  notifications, wasteful pooling,
	      forward progress,	fairness, ...

       [4]    Exceptions to this trend include Java's LOOM, Erlang, and
	      Golang.

       [5]    Latency-oriented	frameworks  are	 not   part   of   this
	      criticism. They have a good excuse for it.

Emilua 0.11.2			  2025-03-12			     EMILUA(7)
