This assert() causes the ipcpd, and subsequently the irmd, to abort() when
shutting down debug builds. It should be fixed some day when the other
components are more robust (frct retransmissions and routing).
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
This was beneficial on dual-Xeon CPU systems, but on most hardware it kills
performance when running multiple IPCPs on a single CPU.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The maximum value of a UINT64 is 20 characters when printed; an extra byte
is needed for the trailing '\0' written by sprintf.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
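
As an illustration of the sizing (a standalone sketch, not the patched code;
the buffer name is arbitrary):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* UINT64_MAX prints as 18446744073709551615: 20 digits. */
            char buf[21]; /* 20 digits + 1 byte for sprintf's trailing '\0' */

            sprintf(buf, "%" PRIu64, UINT64_MAX);
            printf("%s\n", buf);

            return 0;
    }
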
This fixes scaling of the congestion window, which was a buggy mess.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
This moves Resource Information Base (RIB) initialization into the
ipcp_init() function, so all IPCPs initialize a RIB. The RIB now shows
some common IPCP information, such as the IPCP name, the IPCP state and
the layer name if the IPCP is part of a layer.
The initialization of the hash algorithm and layer name was moved out
of the common ipcp source because IPCPs may only know this information
after enrollment. Some IPCPs were not even storing this information.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The broadcast IPCP was missing in the known types, causing the type to
show as UNKNOWN.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The tag was set to 0.18, but the version was still at 17.5 in
CMake.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The new GCC 11.1 compiler discovered that s_dist would be uninitialized
when the policy is unknown, so it should not be free'd in that case. Also
removes some unneeded includes in the broadcast dt.c that I had pending.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
This removes a program name variable that was not used anymore.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The connection manager and enrolment components of the unicast and
broadcast IPCP have a lot in common, as conjectured in the paper. The
initial implementation of the broadcast IPCP just duplicated the
code. This moves the shared functionality to common ground.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
This removes the raptor IPCP. The code hasn't been updated for a while
and wouldn't compile. Raptor served its purpose as a PoC for
Ouroboros-over-Ethernet-Layer-1, but given the extremely niche hardware
needed to run it, it's not worth maintaining anymore.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
There is no need to include enroll.h.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The UDP layer will now use a single (configurable) UDP port, default
3435. This makes it easier to allocate flows as a client from behind a
NAT firewall without having to configure port forwarding rules. So
basically, from now on Ouroboros traffic is transported over a
bidirectional <src><port>:<dst><port> UDP tunnel. The reason for not
using/allowing different client/server ports is that it would require
reading from different sockets using select() or something similar,
but since we need the EID anyway (mgmt packets arrive on the same
server UDP port), there's not a lot of benefit in doing it. Now the
operation is similar to the ipcpd-eth, with the UDP port somewhat
functioning as a "layer name", much like the Ethertype does in the
Ethernet IPCPs.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
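
For illustration, a minimal sketch of opening such a single-port UDP endpoint
(standalone, not the ipcpd-udp source; the port macro and function name are
assumed):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define OUROBOROS_UDP_PORT 3435 /* default, assumed configurable */

    int open_udp_endpoint(void)
    {
            struct sockaddr_in addr;
            int                fd;

            fd = socket(AF_INET, SOCK_DGRAM, 0);
            if (fd < 0)
                    return -1;

            memset(&addr, 0, sizeof(addr));
            addr.sin_family      = AF_INET;
            addr.sin_addr.s_addr = htonl(INADDR_ANY);
            addr.sin_port        = htons(OUROBOROS_UDP_PORT);

            /* Management and data packets all arrive on this one port;
             * the EID in the Ouroboros header demultiplexes them. */
            if (bind(fd, (struct sockaddr *) &addr, sizeof(addr)) < 0) {
                    close(fd);
                    return -1;
            }

            return fd;
    }
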
The UGent email addresses have been shut down; updated to Ouroboros mail
addresses.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
Happy New Year, Ouroboros!
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The ECN marking function should be able to use the packet QoS to allow
prioritizing traffic under congestion. Not yet implemented in MB-ECN.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The t_sent variable is a remnant from the first version and isn't
needed anymore.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The previous value of the ECN field should be passed to the congestion
notification function.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
DH key creation was returning -ECRYPT if OpenSSL is not installed,
instead of success (0).
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
This causes builds to fail on systems where OpenSSL is not available.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The EIDs are now 64-bit. This makes it a tad harder to guess them
(think of port scanning). The implementation has only the most
significant 32 bits random to quickly map EIDs to N+1 flows. While
this is equivalent to a random cookie as a check on flows, the
rationale is that valid endpoint IDs should be pretty hard to guess
(and thus be 64-bit random at least). Ideally one would use
content-addressable memory for this kind of mapping.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
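
For illustration, a sketch of such a mapping (assumed layout, not the actual
flow allocator code): the high 32 bits are random, the low 32 bits index the
N+1 flow.

    #include <stdint.h>
    #include <stdlib.h>

    static uint64_t eid_create(uint32_t flow_idx)
    {
            uint32_t rnd = (uint32_t) random(); /* a real CSPRNG in practice */

            return ((uint64_t) rnd << 32) | flow_idx;
    }

    static uint32_t eid_to_flow(uint64_t eid)
    {
            return (uint32_t) (eid & 0xFFFFFFFF); /* low 32 bits: flow index */
    }
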
There is a check not to rapidly double the window to astronomical
sizes when no congestion is experienced for long periods of time, but
the if-else logic was botched and the window still grew to astronomical
sizes (albeit linearly instead of exponentially).
I also lowered the ECN threshold a bit.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The DT will now post all packets for N+1 flows through the flow
allocator component. This means that N+1 flows can be monitored
through the flow allocator stats, and N-1 flows through the DT stats.
The DT component still keeps stats for the local components (FA and
DHT), but this can be removed once the DHT has its own RIB
output.
The flow allocator shows statistics for:
  Sent packets:     total packets that were presented for sending
                    on this specific flow
  Send failed:      packets that were unable to be sent
  Received packets: total packets that were presented by the DT component
                    on this specific flow
  Received failed:  packets that were unable to be delivered
These stats are presented as both packet counts and byte counts. To
know how many were successful, subtract the failed values from the
totals.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
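
For illustration, the counters described above could be kept per flow roughly
like this (field names are assumed, not the actual flow allocator struct):

    #include <stdint.h>

    struct fa_flow_stats {
            uint64_t snd_pkt;       /* packets presented for sending         */
            uint64_t snd_fail_pkt;  /* packets that could not be sent        */
            uint64_t snd_byte;      /* same counters, in bytes               */
            uint64_t snd_fail_byte;
            uint64_t rcv_pkt;       /* packets presented by the DT component */
            uint64_t rcv_fail_pkt;  /* packets that could not be delivered   */
            uint64_t rcv_byte;
            uint64_t rcv_fail_byte;
    };

    /* Successful counts are derived by subtraction. */
    static uint64_t fa_snd_ok_pkt(const struct fa_flow_stats * s)
    {
            return s->snd_pkt - s->snd_fail_pkt;
    }
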
Noticed an off-by-one in the packet counter: it was incremented before
the flow update, while the byte counter was incremented after it.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The RIB will now show some stats for the flow allocator, including
congestion avoidance statistics. This is needed before decoupling the
data transfer component and the flow allocator as some current stats
show in DT will move to FA.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The mb-ecn policy has a couple of divisions in the math, which I
wanted to avoid. Now it measures the number of bytes sent in a window,
and updates the next window with AIMD logic. If the number of bytes in
the window is reached, the call blocks. To avoid long packet bursts,
the window size continually scales to contain between CA_MINPS (8) and
CA_MAXPS (64) packets.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
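
A minimal sketch of the AIMD byte-budget logic described above (names are
illustrative and not the mb-ecn source; the rescaling of the window length to
stay between CA_MINPS and CA_MAXPS packets is left out):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct ca_ctx {
            uint64_t budget; /* bytes allowed in the current window */
            uint64_t sent;   /* bytes sent in the current window    */
            uint64_t incr;   /* additive increase step, in bytes    */
    };

    /* Called when moving to the next window. */
    static void ca_next_window(struct ca_ctx * ctx, bool congested)
    {
            if (congested)
                    ctx->budget >>= 1;        /* multiplicative decrease */
            else
                    ctx->budget += ctx->incr; /* additive increase       */

            ctx->sent = 0;
    }

    /* Called before sending len bytes; true means the caller must block
     * until the next window opens. */
    static bool ca_must_block(struct ca_ctx * ctx, size_t len)
    {
            ctx->sent += len;

            return ctx->sent >= ctx->budget;
    }
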
The dt component bypasses the flow allocator on the receiver side, and
may try to update congestion context when the flow has already been
deallocated by the receiver. I will fix this bypass and always pass
through the flow allocator sometime soon; for now, I added a check in
the flow allocator call to avoid the SEGV.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The enrollment procedure was not passing the policy for congestion
avoidance.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
This adds congestion avoidance policies to the unicast IPCP. The
default policy is a multi-bit explicit congestion avoidance algorithm
based on data-center TCP congestion avoidance (DCTCP), relaying
information about the maximum queue depth that packets experienced to
the receiver. There's also a "nop" policy to disable congestion
avoidance for testing and benchmarking purposes.
The (initial) API for congestion avoidance policies is:

  void *   (* ctx_create)(void);
  void     (* ctx_destroy)(void * ctx);

These calls create and/or destroy a context for congestion control
for a specific flow. Thread-safety of the context is the
responsibility of the flow allocator (operations on the ctx should be
performed under a lock).

  ca_wnd_t (* ctx_update_snd)(void * ctx,
                              size_t len);

This is the sender call to update the context, and should be called
for every packet that is sent on the flow. The len parameter in this
API is the packet length, which allows calculating the bandwidth. It
returns an opaque union type that is used for the call that checks/waits
whether the congestion window is open or closed (allowing locks to be
released before waiting).

  bool     (* ctx_update_rcv)(void * ctx,
                              size_t len,
                              uint8_t ecn,
                              uint16_t * ece);

This is the call to update the flow congestion context on the receiver
side. It should be called for every received packet. It gets the ECN
value from the packet and its length, and returns the ECE (explicit
congestion experienced) value to be sent to the sender in case of
congestion. The returned boolean signals whether or not a congestion
update needs to be sent.

  void     (* ctx_update_ece)(void * ctx,
                              uint16_t ece);

This is the call for the sending side to update the context when it
receives an ECE update from the receiver.

  void     (* wnd_wait)(ca_wnd_t wnd);

This is a (blocking) call that waits for the congestion window to
clear. It should be stateless (to avoid waiting under locks). This may
change later on if passing the context is needed for different algorithms.

  uint8_t  (* calc_ecn)(int fd,
                        size_t len);

This is the call that intermediate IPCPs (routers) should use to update
the ECN field on passing packets.
The multi-bit ECN policy bases the value for the ECN field on the
depth of the rbuff queue the packets will be sent on. I created another
call to grab the queue depth, as fccntl is write-locking the
application. We can further optimize this to avoid most locking on the
rbuff.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
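
To make the shape of the API concrete, here is what a "nop" policy filling in
these hooks could look like (a sketch under assumptions: the ca_wnd_t union
and all names other than the hooks listed above are invented here):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef union {
            uint64_t wnd; /* opaque window state; unused by nop */
    } ca_wnd_t;

    static void * nop_ctx_create(void)
    {
            return malloc(1); /* no real state, just a valid pointer */
    }

    static void nop_ctx_destroy(void * ctx)
    {
            free(ctx);
    }

    static ca_wnd_t nop_ctx_update_snd(void * ctx, size_t len)
    {
            ca_wnd_t wnd = { 0 }; /* window is always open */

            (void) ctx;
            (void) len;

            return wnd;
    }

    static bool nop_ctx_update_rcv(void * ctx, size_t len,
                                   uint8_t ecn, uint16_t * ece)
    {
            (void) ctx;
            (void) len;
            (void) ecn;

            *ece = 0;

            return false; /* never ask for a congestion update */
    }

    static void nop_ctx_update_ece(void * ctx, uint16_t ece)
    {
            (void) ctx;
            (void) ece;
    }

    static void nop_wnd_wait(ca_wnd_t wnd)
    {
            (void) wnd; /* never blocks */
    }

    static uint8_t nop_calc_ecn(int fd, size_t len)
    {
            (void) fd;
            (void) len;

            return 0; /* never marks packets */
    }
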
The ocbr client was spinning the CPU by default, which made sense on
lab servers with dual Xeons, but not so much for average users.
Sleeping is now the default; busy waiting can be enabled using --spin
if needed.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The ocbr server was using non-blocking reads (probably because we
didn't have read timeouts when we wrote it) and was using a whole CPU
core per thread.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The timerwheel is checked during IPC calls (fevent, flow_read),
causing huge CPU load in the IPCPs, since they have a lot of fevent()
threads for QoS. The timerwheel will need further optimization, but
for now I reduced the default tick time to 5 ms and added a boolean to
check whether the wheel is actually used.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The flow information in the main loop was passed as a direct pointer to
an irm_flow object in the flow database. This was (probably) not
really an issue due to how the flow allocation operations work, but
the thread sanitizer was barfing a lot of (correct) data race errors
when running bigger tests, so it now makes a safe copy of the data.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
I mistakenly set the default to the (buggy) lockless rbuff
implementation instead of the pthread one in commit 3aec660e.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
This reverts commit 978266fe4beba21292daad2d341fe5ff22e08aba.
We were incorrectly unmounting the directory under normal conditions.
Signed-off-by: Sander Vrijders <[email protected]>
Signed-off-by: Dimitri Staessens <[email protected]>
The flow stats had quite a lot of duplication.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
This adds the rendez-vous mechanism to handle the case where the
sending window is closed and window updates get lost. If the sending
window is closed, the sender side will send an RDVS every DELT_RDV
(100 ms), and give up after MAX_RDV (1 second). Upon reception of an
RDVS packet, a window update is sent immediately. We can make this
much more configurable later on (build options for defaults, fccntl
for runtime tuning).
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
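
For illustration, the sender-side timing could look roughly like this (a
sketch; the wnd_open()/send_rdv() helpers and the constant names are
assumptions, not the actual frct code):

    #include <stdbool.h>
    #include <time.h>

    #define DELT_RDV_MS 100  /* resend an RDVS every 100 ms */
    #define MAX_RDV_MS  1000 /* give up after 1 second      */

    static int rdv_wait(bool (* wnd_open)(void), void (* send_rdv)(void))
    {
            struct timespec ts = { 0, DELT_RDV_MS * 1000000L };
            int             waited = 0;

            while (!wnd_open()) {
                    if (waited >= MAX_RDV_MS)
                            return -1;  /* no window update arrived */

                    send_rdv();         /* ask the peer for an update */
                    nanosleep(&ts, NULL);
                    waited += DELT_RDV_MS;
            }

            return 0;
    }
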
If the sending window for flow control is closed, the sending
application will now block until the window opens. Beware that, until
the rendez-vous mechanism is implemented, shutting down a server while
the client is sending (with a blocking write that has no timeout) will
cause the client to hang indefinitely because its window will close.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
Refactor flow_write cleanup.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
This adds sending and receiving window updates for flow control. I
used the 8 pad bits as part of the window update field, so it's 24
bits, allowing for ~16 million packets in flight.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
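
Purely as an illustration of the field width (this is not the actual FRCP
header layout): a 24-bit field packed into three octets covers 2^24, i.e.
roughly 16 million outstanding packets.

    #include <stdint.h>

    static void wnd24_pack(uint8_t * buf, uint32_t wnd)
    {
            buf[0] = (wnd >> 16) & 0xFF;
            buf[1] = (wnd >> 8) & 0xFF;
            buf[2] = wnd & 0xFF;
    }

    static uint32_t wnd24_unpack(const uint8_t * buf)
    {
            return ((uint32_t) buf[0] << 16) |
                   ((uint32_t) buf[1] << 8) |
                    (uint32_t) buf[2];
    }
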
The function was returning under a cleanup handler, which is not
allowed. We don't do anything with the return value if the write
thread ends, so just stopping the thread is fine.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The condition variable was not initialized correctly and was using the
wrong clock for pthread_cond_timedwait.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
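
For reference, initializing a condition variable so that
pthread_cond_timedwait() measures against CLOCK_MONOTONIC looks like this (a
generic sketch, assuming the monotonic clock is the intended one here):

    #include <pthread.h>
    #include <time.h>

    int cond_init_monotonic(pthread_cond_t * cv)
    {
            pthread_condattr_t attr;
            int                ret;

            if (pthread_condattr_init(&attr))
                    return -1;

            pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);

            ret = pthread_cond_init(cv, &attr);

            pthread_condattr_destroy(&attr);

            /* Deadlines passed to pthread_cond_timedwait() must now be
             * built from clock_gettime(CLOCK_MONOTONIC, ...). */
            return ret ? -1 : 0;
    }
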
This allows configuring some parameters for FRCP at compile time, such
as default values for Delta-t and configuration of the timerwheel. The
timerwheel will now reschedule when it fails to create a packet,
instead of setting the flow down immediately. Some new things added
are options to store packets for retransmission on the heap, and using
non-blocking calls for retransmission. The defaults do not change the
current behaviour.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
Flows should be locked when moving the timerwheel. For frcti_snd, a
rdlock is enough.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
A flow_set is thread-safe and doesn't need to be protected by a lock.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
Fix assignment instead of comparison operator.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
There was a dealloc() call in the oping server made under a mutex, which
could leave that mutex locked when the thread was cancelled, causing
oping to hang on exit. This avoids calling dealloc under the lock.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
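
The pattern applied is roughly the following (a sketch with assumed names;
flow_dealloc() is the blocking Ouroboros dev call): grab the state under the
lock, release the lock, then deallocate.

    #include <pthread.h>

    int flow_dealloc(int fd); /* blocking; may be a cancellation point */

    struct server {
            pthread_mutex_t lock;
            int             fd; /* flow to clean up, -1 if none */
    };

    static void server_cleanup(struct server * srv)
    {
            int fd;

            pthread_mutex_lock(&srv->lock);
            fd      = srv->fd; /* take the state under the lock ... */
            srv->fd = -1;
            pthread_mutex_unlock(&srv->lock);

            if (fd != -1)
                    flow_dealloc(fd); /* ... but dealloc outside of it */
    }
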
This completes the retransmission (automated repeat-request, ARQ)
logic, sending (delayed) ACK messages when needed.
On deallocation, flows will keep ACKing and retransmitting any
remaining unacknowledged messages (unless the FRCTFLINGER flag is
turned off; it is on by default). Applications can safely shut down as
soon as everything is ACK'd (i.e. the current Delta-t run is done). The
activity timeout is now passed to the IPCP so it sleeps before
completing deallocation (and releasing the flow_id). That should be
moved to the IRMd in due time.
The timerwheel is revised to be multi-level to reduce memory
consumption. The resolution bumps by a factor of 1 << RXMQ_BUMP (16)
per level and each level has RXMQ_SLOTS (1 << 8) slots. The lowest
level has a resolution of (1 << RXMQ_RES) (20) ns, which is roughly a
millisecond. Currently, 3 levels are defined, so the largest delay we
can schedule at each level is:
  Level 0: 256 ms
  Level 1: ~4 s
  Level 2: about a minute
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
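
The per-level arithmetic works out as follows (a standalone sketch; RXMQ_BUMP
is assumed to be 4, i.e. a factor of 16 per level):

    #include <stdint.h>
    #include <stdio.h>

    #define RXMQ_RES   20       /* level 0 resolution: 1 << 20 ns ~ 1 ms */
    #define RXMQ_SLOTS (1 << 8) /* 256 slots per level                   */
    #define RXMQ_BUMP  4        /* resolution x16 per level (assumed)    */
    #define RXMQ_LVLS  3

    int main(void)
    {
            int i;

            for (i = 0; i < RXMQ_LVLS; ++i) {
                    uint64_t res_ns = UINT64_C(1) << (RXMQ_RES + i * RXMQ_BUMP);
                    uint64_t max_ms = res_ns * RXMQ_SLOTS / 1000000;

                    /* Prints roughly 268 ms, 4295 ms and 68719 ms
                     * (~a minute) for levels 0, 1 and 2. */
                    printf("level %d: max delay ~%llu ms\n", i,
                           (unsigned long long) max_ms);
            }

            return 0;
    }
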
This adds the logic to send a pure acknowledgment packet when there is
no data to send. This needed the event filter for the fqueue, as these
non-data packets should not trigger application PKT events. The
default timeout is now 10 ms, until we have FRCP tuning as part of
fccntl.
Karn's algorithm seems to be very unstable with low (sub-ms) RTT
estimates. Doubling the RTO (every RTO) still seems too slow to prevent
rtx storms when the measured RTT suddenly spikes several orders of
magnitude. Just assuming the ACK'd packet is the last one transmitted
seems to be a lot more stable. It can lead to temporary
underestimation, but this is not a throughput-killer in FRCP.
Changes most time units to nanoseconds for faster computation.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>
The sanitize function in the rdrbuff should only be compiled if robust
mutexes are present on the system.
Signed-off-by: Dimitri Staessens <[email protected]>
Signed-off-by: Sander Vrijders <[email protected]>