Port, Export & Implementation Port in UVM

What exactly are a port, an export and an implementation port in UVM?
I know that a port initiates a data transfer by calling a method, whose implementation must be provided by an implementation port connected to it. But I still don't understand the exact difference.
When should I use a put port/export/implementation port, and similarly a get port/export/implementation port, an analysis port/export, and an analysis/TLM FIFO?

A TLM port defines the set of methods to be used for a particular connection, while a TLM export supplies the implementation of those methods. An implementation port (imp) is the terminal export inside a component that actually provides the method bodies. Connecting a port to an export allows the implementation to be executed when the port's method is called.
Any TLM communication mainly involves two participants: a producer, which generates transactions, and a consumer, which receives them.
A put port allows a producer to send (put) a transaction to a consumer.
A get port allows a consumer to request (get) a transaction from a producer.
Analysis ports are used when a producer needs to broadcast a transaction to multiple consumers simultaneously. This is a non-blocking mode of communication.
TLM FIFOs are used when the consumer wants to store received transactions and process them at a later time.
More detailed and in-depth information can be found in the user guide that ships with the UVM 1.0 reference implementation, which you can download from http://www.accellera.org/downloads/standards/uvm

How does windows handling of timeout after SetCommTimeouts unfold?
Does it reconnect at that level, or do I have to handle reconnection in the application layer?
Perhaps you are assuming a TCP/IP session, but there is no such concept in the serial port API.
A serial port is a point-to-point physical cable connection that allows communication whenever a program opens the port at each end.
Timeouts are set separately for several read/write parameters; see the API documentation for more details.
For both read and write timeout values, when you call the read/write API you will get a timeout error if the specified number of bytes cannot be sent or received within the specified time.
Even if such errors occur, the connection between the ports is maintained; there is no concept of, and no API for, reconnecting at the serial port level.
Rather than closing and re-opening the port on error, the programmer should make sure the program conforms to the communication settings and protocol specification of the connected device.
Depending on the device's protocol specification, such errors may simply mean there is no data to report, or that the device is busy and not ready to receive data.
In that case, simply retry the read/write until it succeeds.
Other devices may define strict state transitions, like a finite state machine, with command/response/error-handling specifications.
So the question cannot really be answered without specifying the connected device.
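As a concrete sketch of the above: on Windows, the timeouts live in a COMMTIMEOUTS structure applied with SetCommTimeouts(), and a timed-out ReadFile() still leaves the handle valid. The port name and the timeout values below are illustrative assumptions, not recommendations.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical port name; adjust for your system. */
    HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                           0, NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "cannot open port\n");
        return 1;
    }

    COMMTIMEOUTS t = {0};
    t.ReadIntervalTimeout         = 50;   /* max ms between two received bytes */
    t.ReadTotalTimeoutMultiplier  = 10;   /* ms per requested byte             */
    t.ReadTotalTimeoutConstant    = 100;  /* ms added to the total read budget */
    t.WriteTotalTimeoutMultiplier = 10;
    t.WriteTotalTimeoutConstant   = 100;
    SetCommTimeouts(h, &t);

    /* On timeout, ReadFile succeeds but returns fewer bytes than requested
       (possibly zero). The handle stays valid; nothing "disconnects". */
    char buf[32];
    DWORD bytes_read = 0;
    if (ReadFile(h, buf, sizeof buf, &bytes_read, NULL) && bytes_read == 0) {
        /* Timeout with no data: just retry, per the device's protocol. */
    }

    CloseHandle(h);
    return 0;
}
```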

Is a bus always necessary for dbus

I am trying to use the low-level C API of DBus to implement a server and client over sockets. My question is: must a bus always be used for DBus communication? And does a bus just mean an extra instance of dbus-daemon?
Yes, you need a bus for DBus communication. The bus is a communication channel, nothing more. More buses do not mean more instances of the DBus daemon; they only mean more communication channels.
In a system you usually have one DBus daemon with one or more buses, each used for some class of messages (defined by your application).
Two applications can communicate via DBus while bypassing the daemon, by addressing the peer to which you want to send the signal/method call directly (the DBus specification allows it). However, I don't think there is a DBus binding that offers this feature. But if you want to use the raw C API of DBus, you can implement it yourself. You can check this discussion for more information on the topic.
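A daemon-less connection with the raw libdbus C API might be sketched like this: one process listens on a raw address with dbus_server_listen(), the other connects to that same address with dbus_connection_open_private(). The address string and callback body here are illustrative assumptions; error handling and the main loop are omitted.

```c
#include <dbus/dbus.h>
#include <stdio.h>

/* Called by libdbus when a peer connects directly to our server. */
static void on_new_connection(DBusServer *server, DBusConnection *conn,
                              void *data)
{
    dbus_connection_ref(conn);
    /* Register message filters / object paths on conn here. */
}

int main(void)
{
    DBusError err;
    dbus_error_init(&err);

    /* Hypothetical address; abstract unix sockets are Linux-only. */
    DBusServer *server = dbus_server_listen("unix:abstract=myapp", &err);
    if (!server) {
        fprintf(stderr, "listen failed: %s\n", err.message);
        return 1;
    }
    dbus_server_set_new_connection_function(server, on_new_connection,
                                            NULL, NULL);

    /* The client process would connect to the same address with:
       DBusConnection *c = dbus_connection_open_private("unix:abstract=myapp",
                                                        &err);
       No dbus-daemon is involved, so there is no bus name to request;
       messages are exchanged directly over this one connection. */
    return 0;
}
```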
Not sure about the C API, but you can have a client and server connect directly using my node.js DBus implementation. Here is an example

Implementing correct inter-module synchronization in Linux kernel

I'm implementing a custom serial bus driver for a certain ARM-based Linux board (a custom UART driver, actually). This driver shall enable communication with a certain MCU on the other end of the bus via a custom protocol. The driver will not (and actually must not) expose any of its functions to userspace, nor is it possible to implement it in userspace at all (hence the need for a custom driver instead of using the stock TTY subsystem).
The driver will implement the communication protocol and UART reads/writes, and it has to export a set of higher-level functions to its users to allow them to communicate with the MCU (e.g. read_register(), drive_gpios(), all this stuff). There will be only one user of this module.
The calling module will have to wait for the completion of the operations (the aforementioned read_register() and others). I'm currently considering using semaphores: the user module will call my driver's function, which will initiate the transfers and wait on a semaphore; the IRQ handler of my driver will send requests to the MCU and read the answers, and, when done, post to the semaphore, thus waking up the calling module. But I'm not really familiar with kernel programming, and I'm baffled by the multitude of possible alternative implementations (tasklets? wait queues?).
The question is: is my semaphore-based approach OK, or too naïve? What are the possible alternatives? Are there any pitfalls I may be missing?
Traditionally, IRQ handling in Linux is done in two parts:
The so-called "top half" runs in IRQ context (the IRQ handler itself). It must exit as fast as possible, so it basically acknowledges the interrupt source and then schedules the bottom half.
The "bottom half" may be implemented as a workqueue. It is where the actual work is done. It runs in process context, so it can use blocking functions, etc.
If you only want to wait for an IRQ in your worker thread, it is better to use a dedicated object called a completion. It was created exactly for this task.
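The completion-based pattern suggested above might look roughly like this inside the driver. The mydrv_* hardware helpers and the 100 ms timeout are hypothetical placeholders; locking against concurrent callers is omitted.

```c
#include <linux/completion.h>
#include <linux/interrupt.h>
#include <linux/jiffies.h>
#include <linux/errno.h>
#include <linux/types.h>

static DECLARE_COMPLETION(xfer_done);
static u32 xfer_result;

/* Hypothetical hardware accessors implemented elsewhere in the driver. */
extern u32 mydrv_read_answer(void);
extern void mydrv_start_request(u8 reg);

/* Top half: runs in IRQ context, must be quick. */
static irqreturn_t mydrv_irq(int irq, void *dev_id)
{
    xfer_result = mydrv_read_answer();   /* grab the MCU's reply       */
    complete(&xfer_done);                /* wake the sleeping caller   */
    return IRQ_HANDLED;
}

/* Exported high-level call: runs in process context, may sleep. */
int mydrv_read_register(u8 reg, u32 *value)
{
    reinit_completion(&xfer_done);
    mydrv_start_request(reg);            /* kick off the UART transfer */

    /* Sleep until the IRQ handler calls complete(), with a timeout so a
       dead MCU cannot hang the caller forever. */
    if (!wait_for_completion_timeout(&xfer_done, msecs_to_jiffies(100)))
        return -ETIMEDOUT;

    *value = xfer_result;
    return 0;
}
```

Compared with a semaphore, a completion expresses the intent ("wait for this one event") directly and is the idiom the kernel documentation recommends for exactly this situation.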

C Multithreading Deadlock's Thread Events

I am trying to use multithreading on a socket in C to develop a connector between two different software applications. I would like it to work in the following manner.

One piece of software will run as the server. Among its other functions, it will listen for a socket connection on a designated port. This piece will function by itself and only use data from the socket once a connection is established and reliable data is being received. For this piece, I would like to listen for a connection and, when one is made, fork a process; when data arrives on the socket, set some variable that another update thread will check to learn that this extra, more precise information is available for consideration.

On the other side, I want a program that, when it boots up, attempts to connect to the other application's port; once connected, it simply calls a function that sends out the information in a non-blocking fashion.

My overall goal is a connector that lets the programmers of the other two pieces of code feel as though they aren't dealing with a socket whatsoever.
I have been able to get multithreaded socket communication going, but I am now trying to modify it to be usable as described, and I am confused about how to avoid concurrent access to the variable that notifies the server side that data has arrived, as well as how to create the non-blocking interaction on the client side. Any help will be appreciated.
-TJ
The question is not entirely clear to me, but if you need to make different pieces of software talk to each other easily, consider using a messaging library like ZeroMQ (www.zeromq.org).
It seems like you have a double producer-consumer problem here:

Client side:  producer -> sender thread   ==(socket)==>
Server side:  receiver thread -> consumer thread

In this case, the most useful data structure is a blocking queue on each side, like Intel TBB's concurrent_bounded_queue.
It allows you to post items from one thread and have another thread pull them, when available, in a thread-safe manner.

Use of Listen() sys call in a multi threaded TCP server

I am in the middle of a multi-threaded TCP server design using the Berkeley socket API under Linux, in system-independent C. The server has to perform I/O multiplexing, as it is a centralized controller that manages clients, each of which maintains a persistent connection with the server forever (unless the machine a client runs on fails, etc.). The server needs to handle a minimum of 500 clients.
I have a 16-core machine. What I want is to spawn 16 threads (one per core) plus a main thread. The main thread will listen() for connections and then dispatch each pending connection to a thread, which will call accept() and then use the select() syscall to perform I/O multiplexing. Now the problem is: how do I know when to dispatch a thread to call accept()? That is, how do I find out in the main thread that a connection is pending at listen(), so that I can assign a thread to handle it? All help much appreciated.
Thanks.
The listen() call prepares a socket to accept incoming connections. You then use select() on that socket: when a new connection arrives, select() reports the listening socket as readable. You then call accept() on the server socket and a new socket descriptor is returned; if you like, you can pass that descriptor on to your thread.
What I would do is have a single thread for accepting connections and receiving data, which then posts the data to a queue as work items for processing.
Note that if each of your 16 threads is going to be running select() (or poll(), or whatever) anyway, there is no problem with them all adding the server socket to their select sets.
More than one may wake when the server socket has an incoming connection, but only one will successfully call accept(), so it should work.
Pro: easy to code.
Con:
- a naive implementation doesn't balance load (you would need, e.g., global stats on the number of accepted sockets handled by each thread, with high-load threads removing the server socket from their select sets)
- thundering-herd behaviour could be problematic at high accept rates
Use epoll or aio/asio. I suspect you got no replies to your earlier post because you didn't specify Linux when you asked for a scalable, high-performance solution. Asynchronous solutions on different OSes are implemented with substantial kernel support, and Linux AIO, Windows IOCP, etc. are different enough that "system independent" does not really apply; nobody could give you an answer.
Now that you have narrowed the OS down to Linux, look up the appropriate asynchronous solutions.
