Why do we need analysis_export when we have analysis_port - uvm

We usually need analysis_export to transfer data from an analysis_port to an analysis_imp. Exports serve as data-transfer objects from ports to implementations, since we cannot connect an analysis_imp to another analysis_imp.
However, analysis_ports can be connected to other analysis_ports.
So my question is: why do we need analysis_export when we could just use analysis_port instead?

It looks like the intention might have been that the export interfaces are to be used for connection purposes only and the port interfaces are to be used to send the data -- port.write(data); [an export need not implement the write function].
BUT analysis_export and analysis_port seem to be very similar in implementation.
It also looks like they can be interchanged. The only difference is the MASK/type bit which says which type of interface it is - export/port. Other than this, the interface implementation appears to be the same.
The main difference is that when the interfaces are being connected (the connect function), a check is applied to the way a connection can be made: port-to-port, port-to-export, port-to-imp, export-to-export, export-to-imp.
These checks might be there to ensure that we use exports to propagate interfaces and finally connect them to some implementation.
But from the current implementation it looks like both the port and the export have write functions which can be called, and they can be used interchangeably even for connections.
The only catch is that once a port is connected to an export, it can only connect onward to other exports and has to terminate at an implementation. [For some reason only port-to-port connections do not generate any run-time error.]
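As a rough illustration of that connection check, here is a toy Python model of the legality matrix listed above (this is only a sketch of the rules; the names and the check itself are made up, not UVM source code):

    # Toy model of the connection-legality check described above. PORT,
    # EXPORT and IMP stand in for the MASK/type bit; ALLOWED is the
    # port-to-port / port-to-export / port-to-imp / export-to-export /
    # export-to-imp matrix from the answer.
    PORT, EXPORT, IMP = "port", "export", "imp"

    ALLOWED = {
        (PORT, PORT), (PORT, EXPORT), (PORT, IMP),
        (EXPORT, EXPORT), (EXPORT, IMP),
    }

    def connect(this_kind, provider_kind):
        """Reject a connection the way the connect-time check would."""
        if (this_kind, provider_kind) not in ALLOWED:
            raise ValueError(f"illegal connection: {this_kind} -> {provider_kind}")

    connect(PORT, EXPORT)     # ok: a port may connect to an export
    connect(EXPORT, IMP)      # ok: an export terminates at an implementation
    # connect(EXPORT, PORT)   # would raise: an export cannot connect back to a port
    # connect(IMP, EXPORT)    # would raise: an imp terminates the chain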
Also, it is possible that at some time in the past they had different implementations [just speculating; someone who has followed the history of UVM could answer that].

How to remove section in JSON in LogicApp

Compose - removeProperty(variables('Message')['Appointment'],'CustomerInfo')
What I want to see is the following.
I managed to recreate your issue without drama and unfortunately, I couldn't find a way to use the removeProperty function to make it work.
You have to call the function at the level it expects so it can remove a single named property, and therefore it only returns the level the function is called at, which is, obviously, a problem.
This may not be the approach for you but to overcome this shortcoming, I used the inline Javascript action to do the work.
If you've never used the action before, you need to make sure you set your flow up to use an Integration account. You can get one in the Free tier so it doesn't cost you anything, but it may be limiting depending on the workload AND it's not supported for production workloads.
https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-enterprise-integration-create-integration-account?tabs=azure-portal%2Cconsumption
This did give me the desired outcome though ...
Firstly, I initialized a variable and then used a Compose action with removeProperty(variables('emo')['appointment'],'customerInfo'):
Then I used a second Compose action with removeProperty(variables('emo'),'appointment'):
Then I used Compose 3 to combine both composes using union(outputs('Compose_2'),outputs('Compose')):
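Taken literally, the three expressions evaluate as in this small Python sketch (the payload structure below is assumed from the question, and union() is modelled as a shallow merge of the two objects):

    # Assumed shape of the variable, based on the question: an object with
    # an 'appointment' property that contains 'customerInfo'.
    emo = {
        "appointment": {
            "id": 1,
            "customerInfo": {"name": "Jane"},
            "slot": "09:00",
        },
        "location": "Main St",
    }

    # Compose:   removeProperty(variables('emo')['appointment'],'customerInfo')
    compose = {k: v for k, v in emo["appointment"].items() if k != "customerInfo"}

    # Compose 2: removeProperty(variables('emo'),'appointment')
    compose_2 = {k: v for k, v in emo.items() if k != "appointment"}

    # Compose 3: union(outputs('Compose_2'), outputs('Compose')) -- shallow merge
    compose_3 = {**compose_2, **compose}

    print(compose_3)   # {'location': 'Main St', 'id': 1, 'slot': '09:00'}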
Alternatively, if you have an Integration account, you can use the inline JavaScript action referenced above. Hope this clears all your doubts.

Can I query a Near contract for its method signatures?

Is there a way to query what methods are offered by a given NEAR contract? (So that one could do autodiscovery of some standard interface, for instance.) Or do you have to just know the method signatures already before you can interact with a contract?
No, not yet. Currently all contract methods have the same signature: () -> (). No arguments and nothing is returned. Each method has a wrapper function that deserializes the input bytes from the host, calls the method, then serializes the return value and passes the bytes back to the host.
This is done with input and value_return. See input.
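Conceptually, the generated wrapper looks something like this Python sketch (this is only a model of the idea; host_input and host_value_return are hypothetical stand-ins for the input and value_return host functions, and the JSON argument format is assumed):

    import json

    # Hypothetical stand-ins for the host functions mentioned above; a real
    # contract gets these from the NEAR runtime, not from Python.
    _input_bytes = b'{"amount": 5}'
    _returned = []

    def host_input():
        return _input_bytes

    def host_value_return(data):
        _returned.append(data)

    def export(method):
        """Model of the () -> () wrapper generated around each contract method:
        read the raw argument bytes from the host, call the real method, then
        serialize the return value and pass the bytes back."""
        def wrapper():                          # exported entry point: no args, no return
            args = json.loads(host_input() or b"{}")
            result = method(**args)
            if result is not None:
                host_value_return(json.dumps(result).encode())
        return wrapper

    @export
    def deposit(amount):
        return {"new_balance": amount}

    deposit()                                   # called as () -> ()
    print(_returned)                            # [b'{"new_balance": 5}']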
There are plans to include the actual signatures of the methods in the binary in a special section, which would solve this issue.
NEP-351 was recently approved, which provides a mechanism for contracts to expose all standards they implement. However, it is up to contract developers to follow this NEP. When integrated into the main SDK, I presume most will.
Alternatively, there is a proposal to create a global registry as a smart contract that provides this information.
Currently, there is not.
You will need to know what contract methods are available in order to interact with a smart contract deployed on NEAR. Hopefully, the ability to query available methods will be added in the near future.
I suppose you can just include a method in your own contracts that returns the other method signatures in some useful format: JSON or whatever.
You would have to make sure that it stays current, maybe by writing some unit tests that use this method to exercise all the others.
I suppose this interface (method and unit tests) could be standardized as an NEP in the short term until our interfaces become discoverable. Any contracts that adhere to this NEP must include this "tested reflection method" or "documentation method" or whatever it would be called.
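As a rough sketch of that idea (plain Python rather than a real NEAR contract; the method names and the signature format are made up):

    import json

    # A "documentation method" that a contract could expose alongside its real
    # methods; unit tests would call everything listed here to keep it current.
    def get_interface():
        return json.dumps({
            "transfer": "transfer(receiver_id: string, amount: string) -> void",
            "balance_of": "balance_of(account_id: string) -> string",
            "get_interface": "get_interface() -> string",
        })

    print(get_interface())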

SQLDelight v1.4 not generating interface

Since v1.4, SQLDelight generates data classes only.
Before, the tool generated an interface and a default implementation of that interface.
That made it easy to compose objects with associated projections.
Is there any way to get these interfaces back?
see this answer: https://github.com/cashapp/sqldelight/pull/1698#issuecomment-646306522
Essentially, to keep backwards compatibility I would copy and paste the old interfaces yourself. We made this change because data classes really should have been the original implementation, but we needed interfaces to support AutoValue; that's no longer the case, so if there are still situations where you need interfaces they should probably be user code.

Dbus glib interface design and usage

In our project we use D-Bus for inter-process communication. We have one interface where all the methods that need to be exposed to other processes are tied together; that is, only one interface for all the methods. Is that a good idea? Is it better to group the methods into different interfaces? We have around 50 methods. I am not familiar with object-oriented languages, but I feel it is better to group them into different interfaces.
What would be the advantage of splitting the methods across different interfaces? I need some justification for grouping methods under different interfaces.
Note that D-Bus has an auto code generator which generates the necessary classes and methods when an XML description is given as input.
From an object-oriented perspective it's better to group messages into different interfaces according to their meaning. For an instant-messaging application like Pidgin you could have:
MyIPCInterface:
accountCreate(...)
accountList(...)
accountRemove(...)
messageSend(...)
messageReceived(...) * signal
statusChange(....)
statusChanged(...) * signal
but a better choice would be to separate this into different interfaces according to their meaning:
AccountManagerInterface
create(...)
list(...)
remove(...)
AccountInterface
sendMessage(...)
messageReceived(...) * signal
statusChange(...)
statusChanged(...) * signal
Of course, there are many other ways to design this, but the main point is that when you receive the "messageReceived" signal from an AccountInterface object, you know which account "object" received the message. Better still, you are separating the concerns of who should manage accounts from who should manage the account objects.
There's much more to say about that but I hope this may help to clarify...
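As a concrete sketch of that split (written with dbus-python for brevity rather than dbus-glib/C, and requiring a running session bus; all bus, interface, and object-path names below are made up):

    import dbus
    import dbus.service
    from dbus.mainloop.glib import DBusGMainLoop

    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    bus_name = dbus.service.BusName("org.example.IM", bus)

    accounts = {}   # keep references to the exported per-account objects

    class Account(dbus.service.Object):
        """One exported object per account; messaging and status live on its own interface."""

        @dbus.service.method("org.example.Account", in_signature="ss")
        def SendMessage(self, to, text):
            pass   # hand the message to the IM backend here

        @dbus.service.signal("org.example.Account", signature="ss")
        def MessageReceived(self, sender, text):
            pass   # the body of a D-Bus signal is empty; calling it emits the signal

    class AccountManager(dbus.service.Object):
        """One object that owns the accounts: create/list/remove live here."""

        @dbus.service.method("org.example.AccountManager",
                             in_signature="s", out_signature="o")
        def Create(self, account_name):
            path = "/org/example/accounts/" + account_name
            accounts[account_name] = Account(bus, path)
            return dbus.ObjectPath(path)

        @dbus.service.method("org.example.AccountManager", out_signature="ao")
        def List(self):
            return [dbus.ObjectPath("/org/example/accounts/" + n) for n in accounts]

    manager = AccountManager(bus, "/org/example/AccountManager")
    # a GLib main loop would be run here to actually serve incoming calls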

What are the most frequently used flow controls for handling protocol communication?

I am rewriting code to handle some embedded communications and right now the protocol handling is implemented in a While loop with a large case/switch statement. This method seems a little unwieldy. What are the most commonly used flow control methods for implementing communication protocols?
It sounds like the "while + switch/case" is a state machine implementation. I believe that a well-thought-out state machine is often the easiest and most readable way to implement a protocol.
When it comes to state machines, breaking some of the traditional programming rules comes with the territory. Rules like "every function should be less than 25 lines" just don't work. One might even argue that state machines are GOTOs in disguise.
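For example, here is a minimal loop-over-bytes state machine in Python for an assumed frame format (start byte, length, payload, checksum); the format is invented purely to show the shape such code takes:

    SOF = 0x7E                                        # assumed start-of-frame marker
    WAIT_SOF, READ_LEN, READ_PAYLOAD, READ_CSUM = range(4)

    def parse(stream):
        """Extract frames of the form: 0x7E <len> <len payload bytes> <checksum>."""
        state, length, payload, frames = WAIT_SOF, 0, bytearray(), []
        for byte in stream:                           # the loop over incoming bytes
            if state == WAIT_SOF:
                if byte == SOF:
                    state = READ_LEN
            elif state == READ_LEN:
                length, payload = byte, bytearray()
                state = READ_PAYLOAD if length else READ_CSUM
            elif state == READ_PAYLOAD:
                payload.append(byte)
                if len(payload) == length:
                    state = READ_CSUM
            elif state == READ_CSUM:
                if byte == sum(payload) % 256:        # drop the frame on a bad checksum
                    frames.append(bytes(payload))
                state = WAIT_SOF
        return frames

    print(parse(bytes([0x7E, 2, 0x10, 0x20, 0x30])))  # [b'\x10 ']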
For cases where you key off of a field in a protocol header to direct you to the next stage of processing for that protocol, arrays of function pointers can be used. You use the value from the protocol header to index into the array and call the function for that protocol.
You must handle all possible values in this array, even those which are not valid. Eventually you will get a packet containing the invalid value, either because someone is trying an attack or because a future rev of the protocol adds new values.
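In Python the same idea becomes a dictionary keyed by the header field, with an explicit fallback so every value, including invalid ones, is handled (message types and handler names below are invented):

    # Dispatch-table analogue of the function-pointer array described above.
    def handle_data(payload):    print("data:", payload)
    def handle_ack(payload):     print("ack")
    def handle_reset(payload):   print("reset")
    def handle_unknown(payload): print("unknown/invalid message type dropped")

    HANDLERS = {
        0x01: handle_data,
        0x02: handle_ack,
        0x03: handle_reset,
    }

    def dispatch(msg_type, payload):
        # .get() supplies the fallback, so values a future protocol revision
        # adds (or an attacker sends) still land somewhere safe.
        HANDLERS.get(msg_type, handle_unknown)(payload)

    dispatch(0x01, b"hello")   # data: b'hello'
    dispatch(0x7F, b"")        # unknown/invalid message type dropped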
If it is all one protocol being handled, then a switch/case statement may be your best bet. However, you should break all the individual message handlers out into their own functions.
If your switch statement contains any code to actually handle the messages, then you would be better off breaking them out.
If it is handling multiple similar protocols, you could create a class to handle each one based on the same abstract class; when a connection comes in, you determine which protocol it is and create an instance of the appropriate handler class to decode and handle the communications.
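A sketch of that structure: an abstract base class, one subclass per protocol, and a small factory that picks the handler when a connection arrives (the protocols and the sniffing rule are invented):

    from abc import ABC, abstractmethod

    class ProtocolHandler(ABC):
        """Common interface every protocol-specific handler implements."""
        @abstractmethod
        def handle(self, data: bytes) -> None: ...

    class BinaryFrameHandler(ProtocolHandler):
        def handle(self, data: bytes) -> None:
            print("decoding binary frames:", data.hex())

    class AsciiLineHandler(ProtocolHandler):
        def handle(self, data: bytes) -> None:
            print("decoding ASCII lines:", data.splitlines())

    def make_handler(first_bytes: bytes) -> ProtocolHandler:
        # Decide which protocol this connection speaks; the rule here is
        # invented purely to show where that decision would live.
        return AsciiLineHandler() if first_bytes[:1].isalpha() else BinaryFrameHandler()

    handler = make_handler(b"HELLO\r\n")
    handler.handle(b"HELLO\r\nSTATUS\r\n")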
I would think this depends largely on the language you are using and what sort of data structures you have available to you.
In Python, for example, you could create a dictionary of all the different handling routines and just look up the right method/function to call.
Case/switch statements aren't bad things, but if they get huge (like they can with massive numbers of protocol handlers) then they can become unwieldy to work with.
