Does web3js have a similar override option as web3py? - web3js

I was reading about state override in this article, where you can essentially alter the code of a called contract to do what your own, locally written contract dictates. No deploys occur, and no writes to the blockchain are possible, but it ends up being one option for executing certain kinds of reads very efficiently.
Unfortunately, most of my codebase is in TS, and I haven't seen any of this in the web3js docs. Anyone know a way to set override params in web3js?

All readonly (view and pure) functions should be callable without this, but if you care about simulating transactions, callStatic (https://docs.ethers.io/v5/single-page/#/v5/api/contract/contract/-%23-contract-callStatic) will be your friend.

Related

How to make a scenario that depends on another scenario using cucumber, selenium and java

I am learning Cucumber through some tutorials and there is something that I don't know how to do. I need to make a scenario that depends on another scenario (for example, for a logout scenario I have to be logged in first before logging out). So what should I do? Should I write the login steps in the logout scenario (in the feature file), or is there a way to call the whole login scenario from the logout scenario?
I also need to know: should I set up the driver before each scenario and quit the driver after each scenario?
There is no support for creating a scenario that depends on another scenario in Cucumber-JVM. I think it is still supported in the Ruby implementation of Cucumber. It is, however, a dangerous practice. Calling a scenario from another scenario will not be supported in future versions of Cucumber.
That said, how do you solve your problem when you want to reuse functionality? You mention logout; how do you handle it when many scenarios require the user to be logged out?
The solution is to implement the functionality in a helper method or helper class that each step requiring a logged-out user calls.
This allows each scenario to be independent of all other scenarios, which in turn allows you to run the scenarios in a random order. I don't think the execution order of the scenarios is guaranteed. I do know that having the JUnit runner execute scenarios in a random order has been discussed, just to enforce the habit of not having scenarios depend on other scenarios.
Your other question, how to set up WebDriver before a scenario and how to tear it down, is solved using the Before and After hooks in Cucumber. When using them, be careful not to import the JUnit version of Before and After.
Take a look at Cucumber hooks; they allow you to set up global 'before' and 'after' steps, which will run for every scenario without having to specify them in your feature files.
Because they run for every scenario, they're ideal for something like initialising the driver at the start of each test. It may be suitable for running your logon, but if there's a chance you'll have a scenario which doesn't involve logging on then it wouldn't be the way to go (alternative further down). The same applies for the after scenario, which is where you could perform the log off and shut down your driver. As an example:
/**
 * Before each scenario, initialise the WebDriver.
 */
@Before
public void beforeScenario() {
    this.application.initialiseWebDriver();
}

/**
 * After each scenario, quit the web driver.
 */
@After
public void afterScenario() {
    this.log.trace("afterScenario");
    this.application.quitBrowser();
}
In my example, I'm simply starting the driver in the before scenario and closing it in the after, but in theory these before and after methods could contain anything; you just need to have them in your step definitions class and annotate them with '@Before' and '@After' as shown.
As well as these, you can also have multiple before and after hooks which you can invoke by tagging the scenario. As an example:
/**
 * Something to do after certain scenarios.
 */
@After("@doAfterMethod")
public void afterMethod() {
    this.application.afterThing();
}
You can set up something like this in your step defs, and by default it won't run. However, you can tag your scenario with '@doAfterMethod' and it will run for the tagged scenarios, which makes this good for a common step that you need at the end of some tests, but not all of them. The same works for methods to run before a scenario; just change '@After' to '@Before'.
Bear in mind that if you do use these, the global Before and After (so in this example the driver initialisation and quitting) will always be the first and last things to run, with any other before/afters in between them and the scenario.
Further Reading:
https://github.com/cucumber/cucumber/wiki/Hooks
https://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/
You can set test dependencies with QAF BDD. You can use dependsOnMethods or dependsOnGroups in the scenario metadata to set a dependency, the same as in TestNG, because QAF BDD is a TestNG-based BDD implementation.

@Singleton vs @ApplicationScoped

For a project I need to have a unique ID generator. So I thought about a Singleton with synchronized methods.
Since a Singleton following the traditional Singleton pattern (private static instance) is shared across sessions, I'm wondering whether the @Singleton annotation works the same way?
The documentation says: Identifies a type that the injector only instantiates once.
Does it mean that a @Singleton will be independent per user session (which is bad for an id generator)? Should I prefer an old-school Singleton with Class.getInstance() over injection of a @Singleton bean?
Or should I use neither and provide the service within an @ApplicationScoped bean?
It must be guaranteed that only ONE thread, independent of the user session, can access the method that generates the next id. (It's not solvable with auto-increment database ids.)
Edit: I'm talking about JSF 2.2, CDI and javax.inject.* :)
All those kinds of singletons (static, @javax.inject.Singleton, @javax.ejb.Singleton and @javax.enterprise.context.ApplicationScoped) are created once per JVM.
An object that is created once per user session must be annotated with @javax.enterprise.context.SessionScoped, so no, singletons will not be instantiated per user session.
Notice that there are two @Singleton annotations, one in the javax.inject package and the other in the javax.ejb package. I'm referring to them by their fully-qualified names to avoid confusion.
The differences between all those singletons are subtle and I'm not sure I know all the implications, but a few come to mind:
@javax.ejb.Singleton is managed by the EJB container, so it can handle transactions (@javax.ejb.TransactionAttribute), read/write locking and time-outs (@javax.ejb.Lock, @javax.ejb.AccessTimeout), application startup (@javax.ejb.Startup, @javax.ejb.DependsOn) and so on.
@javax.enterprise.context.ApplicationScoped is managed by the CDI container, so you won't have the transaction and locking features that EJB has (unless you use a post-1.0 CDI that has added transactions), but you still have lots of nice things such as @javax.enterprise.inject.Produces, @javax.annotation.PostConstruct, @javax.inject.Named and @javax.enterprise.inject.Disposes (though many of these features are available to EJBs too).
@javax.inject.Singleton is similar to @ApplicationScoped, except that there is no proxy object (clients will have a reference to the object directly). There will be less indirection to reach the real object, but this might cause some issues related to serialization (see this: http://docs.jboss.org/weld/reference/latest-2.2/en-US/html_single/#_the_singleton_pseudo_scope).
A plain static field is simple and just works, but it's controlled by the class loader so in order to understand how/when they are instantiated and garbage collected (if ever), you will need to understand how class loaders work and how your application server manages its class loaders (when restarting, redeploying, etc.). See this question for more details.
javax.inject.Singleton - When used on your bean, you have to implement writeReplace() and readResolve() to avoid any serialization issues. Use it judiciously based on what your bean actually contains.
javax.enterprise.context.ApplicationScoped - Allows the container to proxy the bean and take care of the serialization process automatically. This is recommended to avoid unexpected issues.
For more information, refer to this (page 45).

EF Code First / WPF Application architecture guidance needed

I'm working on a 2-tier WPF/EF Code First application. I did a lot of googling but couldn't find a sample implementation of what I was looking for... was hoping that someone on this forum could help me out. Here are the requirements:
On Application Start up
Open a DBContext
Cache the reference data in various maps/lists when the application starts
Close Context.
When user opens a form
Open a DBContext (I'm using UnitOfWork pattern here)
Fetch a fresh copy of Entity from context for Editing.
Call SaveChanges() when Save button is hit.
Close the Context.
The problem manifests when I use an object from the cache to change a navigation property,
e.g. using a drop-down (backed by the cache, which was created using a different DbContext) to set the Department navigation property.
The UnitOfWork either throws an exception saying the entity was loaded in another DbContext (when Department is a lazy-loaded DynamicProxy) or inserts a new row into the Department table.
I couldn't find even a single example where reference data was being cached... I can't believe that no one came across this issue. Either I'm not looking in the right place or not using the right keywords.
I hope this is doable using EF. I'd appreciate if you can share your experiences or post some references.
I'm kinda new to this so would like to avoid using too many frameworks and just stick to POCO with WPF/EF stack.
Try to attach your cached item (probably, you'd make a clone before attaching):
var existingUnicorn = GetMyExistingUnicorn();

using (var context = new UnicornsContext())
{
    // Attach marks the entity as Unchanged, so it is tracked by this
    // context without being re-inserted into the database.
    context.Unicorns.Attach(existingUnicorn);
    context.SaveChanges();
}
Refer to Using DbContext... article.
You mention you are using WPF for this; in that case you don't necessarily have to open a new DbContext every time you want to interact with the domain layer. (Apologies if this goes against the UoW pattern that you are keen on using.)
Personally I have been using code-first development for a desktop application, and I have found that pooling the contexts (and therefore the connection) prevents this problem, and hasn't led to any problems thus far.
In principle, as soon as the application is launched, a main Context object is opened for the main UI thread and stays open throughout the application's lifetime. It is stored statically and is retrieved by any Repository class when needed.
For multi-threading scenarios, any background threads are free to open up additional contexts and use them in Repositories to prevent any race conditions.
If you were to adopt this approach, you would find that as all repositories share the same context, there are no issues arising from object context tracking.
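To make this concrete, here is a minimal sketch of that shared-context approach; the MyAppContext, Customer, AppData and CustomerRepository names are hypothetical and not from the question:
using System.Data.Entity;

// Hypothetical entity and context, standing in for the real domain model.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class MyAppContext : DbContext
{
    public DbSet<Customer> Customers { get; set; }
}

// A single context held statically for the lifetime of the WPF application
// and reused by repositories on the main UI thread.
public static class AppData
{
    private static readonly MyAppContext mainContext = new MyAppContext();

    public static MyAppContext MainContext
    {
        get { return mainContext; }
    }
}

public class CustomerRepository
{
    private readonly MyAppContext context = AppData.MainContext;

    public Customer Find(int id)
    {
        // All repositories share the same change tracker, so entities loaded
        // here can be edited elsewhere and saved with a single SaveChanges().
        return context.Customers.Find(id);
    }

    public void Save()
    {
        context.SaveChanges();
    }
}
Background threads would create their own MyAppContext instances instead of using AppData.MainContext, as noted above for multi-threading scenarios.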
I ended up defining an int foreign key property in addition to the navigation property.
In my application I only modify the int property and use the navigation property for displaying the details (read-only controls).
While this works, it makes the application a little fragile and sometimes inconsistent.
Although this blog claims that the FK and navigation properties are kept in sync by EF, I couldn't get it to work.
http://coding.abel.nu/2012/03/ef-code-first-navigation-properties-and-foreign-keys
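For reference, the FK-plus-navigation pattern described above looks roughly like this; the Employee and Department types are hypothetical stand-ins for the entities in the question:
using System.ComponentModel.DataAnnotations.Schema;

public class Department
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class Employee
{
    public int Id { get; set; }

    // Scalar FK property: the only thing the application sets, e.g. from
    // the drop-down that is backed by the reference-data cache.
    public int DepartmentId { get; set; }

    // Navigation property: used only for read-only display of the details.
    [ForeignKey("DepartmentId")]
    public virtual Department Department { get; set; }
}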

How does the Composite C1 architecture work?

Can anyone give a high-level description of what is going on in the Composite C1 core? In particular I am interested in knowing how the plugin architecture works and what the core components of the system are, i.e. when a request arrives, what is happening in the architecture. The description doesn't have to be too verbose, just a list of steps and the classes involved.
Hopefully one of the core development team would enlighten me... and maybe publish some more API (hint hint more class documentation please).
From request to rendered page
The concrete path a request takes depends on the version of C1 you're using, since it was changed to use Routing in version 2.1.2. So let's see:
< 2.1.2
Composite.Core.WebClient.Renderings.RequestInterceptorHttpModule will intercept all incoming requests and figure out if the requested path corresponds to a valid C1 page. If it does, the URL will be rewritten to the C1 page handler ~/Renderers/Page.aspx.
>= 2.1.2
Composite.Core.Routing.Routes.Register() adds a C1 page route (Composite.Core.Routing.Pages.C1PageRoute) to the Routes collection that looks at the incoming path and figures out if it's a valid C1 page. If it is, it returns an instance of ~/Renderers/Page.aspx ready to be executed.
Okay, so now we have an instance of an IHttpHandler ready to make up the page to be returned to the client. The actual code for the IHttpHandler is easy to see, since it's located in ~/Renderers/Page.aspx.cs.
OnPreInit
Here we're figuring out which page id and which language were requested, and looking at whether we're in preview mode or not, which data scope we're in, etc.
OnInit
Now we're fetching the content from each Content Placeholder of our page and executing the functions it may contain. This is done by calling Composite.Core.WebClient.Renderings.Page.PageRenderer.Render, passing the current page and our placeholders. Internally it will call the method ExecuteFunctions, which will run through the content and recursively resolve C1 function elements (<f:function />), execute them and replace each element with the function's output. This is repeated until there are no more function elements in the content, in case functions themselves output other functions.
Now the whole content is wrapped in an ASP.NET WebForms control and inserted into our WebForms page. Since C1 functions can return WebForms controls like UserControl etc., this is necessary for them to work correctly and trigger the event lifecycle of WebForms.
And that's basically it. Rendering of a requested page is very simple and very extensible. For instance, there is an extension that enables the use of MasterPages, which simply hooks into this rendering flow very elegantly. And because we're using Routing to map which handler to use, it's also possible to forget about ~/Renderers/Page.aspx and just return an MvcHandler if you're an MVC fanatic.
API
Now, when it comes to the more core APIs there are many, depending on what you want to do. But you can be pretty sure that, no matter what, the necessary ones are there to get the job done.
At the deep end we have the Data Layer, which most other APIs and facades are centered around. This means you can do most things by working with the raw data, instead of going through facades all the time. This is possible since most configuration of C1 is done using its own data layer to store configuration.
The Composite C1 core group has yet to validate/refactor and document all the APIs in the system, and hence operates with the concept of 'a public API' and what can become an API when the demand is there. The latter is a pretty darn stable API, but without guarantees.
The public API documentation is online at http://api.composite.net/
Functions
Functions are a fundamental part of C1 and a technique to abstract logic from execution. Basically, everything that either performs an action or returns some data/string/values can be a candidate for a function. At the lowest level a function is a .NET class implementing the IFunction interface, but luckily there are many easier ways to work with it. Out of the box C1 supports functions defined as XSLT templates, C# methods or SQL. There is also community support for writing functions using Razor or having ASP.NET UserControls (.ascx files) act as functions.
Since all functions are registered in C1 during system startup, we use the Composite.Functions.FunctionFacade to execute whatever function we know the name of. Use GetFunction to get a reference to a function, and then Execute to execute it and get a return value. Functions can take parameters, which are passed as real .NET objects when executing a function. There is also full support for calling functions with XML markup using the <f:function /> element, meaning that editors, designers, template makers etc. can easily access a wealth of functionality without having to know how to write .NET code.
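As a rough sketch (the function name below is made up, and the exact Execute overloads may vary between C1 versions, so treat this as illustrative rather than canonical):
using Composite.Functions;

// Resolve a registered function by its full name (hypothetical name here)
// and execute it; any parameters are passed as plain .NET objects.
IFunction function = FunctionFacade.GetFunction("My.Namespace.MyFunction");
object result = FunctionFacade.Execute<object>(function);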
Read more about functions here http://users.composite.net/C1/Functions.aspx and about how to use e.g. Razor to make functions here http://docs.composite.net/C1/ASP-NET/Razor-Functions.aspx
Globalization and Localization
C1 has full multi-language support in the core. Composite.Core.Localization.LocalizationFacade is used for managing the installed locales in the system: querying, adding and removing them. A locale can be any CultureInfo known by your system.
Composite.Core.ResourceSystem.StringResourceSystemFacade is used for getting strings at runtime that match the CultureInfo your request is running in. Use this instead of hardcoding strings on your pages or in your templates.
Read more about Localization here http://docs.composite.net/C1/HTML/C1-Localization.aspx
Global events
Composite.C1Console.Events.GlobalEventSystemFacade is important to know about if you need to keep track of when the system is shutting down, so you can make last-minute changes. Since C1 is highly multithreaded, it's easy to write extensions and modules for C1 that are multithreaded as well, taking advantage of multi-core systems and parallelization, and therefore it's also crucial to shut down one's threads in a proper manner. The GlobalEventSystemFacade helps you do that.
Startup events
If you write plug-ins, these can have a custom factory. Other code can use the ApplicationStartupAttribute attribute to get called by the Composite C1 core when the web app starts up.
Data events
You can subscribe to data add, edit and delete events (pre and post) using the static methods on Composite.Data.DataEvents<T>. To attach to these events when the system starts up, use the ApplicationStartupAttribute attribute.
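A small sketch of what that could look like; the event and startup-method names here are written from memory and may need adjusting against your C1 version:
using Composite.Core.Application;
using Composite.Data;
using Composite.Data.Types;

[ApplicationStartup]
public static class PageEventRegistrator
{
    public static void OnBeforeInitialize() { }

    public static void OnInitialized()
    {
        // Subscribe to the "pre add" event for pages (IPage is the
        // built-in page data type).
        DataEvents<IPage>.OnBeforeAdd += (sender, args) =>
        {
            var page = (IPage)args.Data;
            // Inspect or tweak the page before it is persisted.
        };
    }
}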
Data
Composite.Core.Threading.ThreadDataManager is important if you're accessing the Data Layer outside of a corresponding C1 page request. This could be a custom handler that just has to serve the newest news as an RSS feed, or maybe you're writing a console application. In these cases, always remember to wrap the code that accesses the data like this:
using (Composite.Core.Threading.ThreadDataManager.EnsureInitialize())
{
    // Code that works with the C1 data layer goes here
}
For accessing and manipulating data it's recommended NOT to use the DataFacade class directly, but to wrap all code that gets, updates, deletes or adds data like this:
using (var data = new DataConnection())
{
    // Do things with data
}
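For instance, a simple query against the built-in IPage type could look like this (the LINQ query itself is just an illustrative example):
using System.Linq;
using Composite.Data;
using Composite.Data.Types;

using (var data = new DataConnection())
{
    // Get<T>() exposes the data type as an IQueryable.
    var pageTitles = data.Get<IPage>()
                         .Select(p => p.Title)
                         .ToList();
}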
IO
When working with files and directories it's important to use the C1 equivalents Composite.Core.IO.C1File and Composite.Core.IO.C1Directory instead of .NET's File and Directory. This is because C1 can be hosted on Azure, where you might not have access to the filesystem in the same way as you do on a normal Windows Server. By using C1's File and Directory wrappers you can be sure that the code you write will be able to run on Azure as well.
C1 Console
The Console is a whole subject in itself and has many, many APIs.
You can create your own trees using Composite.C1Console.Trees.TreeFacade or Composite.C1Console.Elements.ElementFacade and implementing a Composite.C1Console.Elements.Plugins.ElementProvider.IElementProvider.
You can use Composite.C1Console.Events.ConsoleMessageQueueFacade to send messages from the server to the client to make it do things like open a message box, refresh a tree, set focus on a specific element, open a new tab, etc.
Composite.C1Console.Workflow.WorkflowFacade is used for getting instances of specific workflows and interacting with them. Workflows are a fundamental part of C1 and are the way multi-step operations are defined and executed. This makes it possible to save the state of an operation, so that e.g. a 10-step wizard is persisted even if the server restarts or anything else unexpected happens. Workflows are built using Windows Workflow Foundation, so if you are familiar with that, you should feel at home.
There is also a wealth of JavaScript facades and methods you can hook into when writing extensions to the Console. Much more than I could ever cover here, so I will refrain from even getting started on that subject.
composite.config
A fundamental part of C1 is providers; almost everything is made up of providers, even much of the core functionality. Everything in the Console, from perspectives to trees, elements and actions, is fed into C1 with providers. All the standard functions, the data layer and all the widgets for use with the Function Call editor are fed into C1 with providers. All the localization strings for use with Resources, users and permissions, URL formatters etc. are providers as well.
Composite.Data.Plugins.DataProviderConfiguration
Here all providers that can respond to the methods on DataFacade, Get, Update, Delete, Add etc. are registered. Every provider informs the system which interfaces it can interact with and C1 makes sure to route all requests for specific interfaces to their respective dataproviders.
Composite.C1Console.Elements.Plugins.ElementProviderConfiguration
Here we're defining the perspectives and the trees inside the Console. All the standard perspectives you see when you start the Console the first time are configured here, no magic or black box involved.
Composite.C1Console.Elements.Plugins.ElementActionProviderConfiguration
Action providers are able to add new menuitems to all elements in the system, based on their EntityToken. This is very powerful when you want to add new functionality to existing content like versioning, extranet security, custom cut/paste and the list goes on.
Composite.C1Console.Security.Plugins.LoginProviderConfiguration
A LoginProvider is what the C1 console will use to authenticate a user and let you log in or not. Unfortunately this isn't very open but with some reflection you should be all set.
Composite.Functions.Plugins.FunctionProviderConfiguration
Composite C1 will use all the registered FunctionProviders to populate its internal list of functions on system startup.
Composite.Functions.Plugins.WidgetFunctionProviderConfiguration
WidgetProviders are used in things like the Function Call Editor or in Forms Markup to render custom UI for selecting data.
Composite.Functions.Plugins.XslExtensionsProviderConfiguration
Custom extensions for use in XSLT templates are registered here
And then we have a few sections for pure configuration, like caching or what to parallelize, but they are not as interesting as the providers.
Defining and using sections
Sections in composite.config and other related .config files are completely standard .NET configuration and obey the rules thereof. That means that to be able to use a custom element, like e.g. Composite.Functions.Plugins.WidgetFunctionProviderConfiguration, it has to be defined as a section. A section has a name and refers to a type that inherits from System.Configuration.ConfigurationSection. Composite uses the Microsoft Enterprise Libraries for handling most of these common things like configuration, logging and validation, and therefore all Composite's sections inherit from Microsoft.Practices.EnterpriseLibrary.Common.Configuration.SerializableConfigurationSection. Now, this type just has to have properties for all the elements we want to be able to define in the .config file, and .NET will automatically make sure to wire things up for us.
If you want to access the configuration for a particular section, you call Composite.Core.Configuration.ConfigurationServices.ConfigurationSource.GetSection(".. section name"), cast it to your specific type and you're good to go.
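In code, that would look something like the following; the section name and the strongly typed section class are placeholders for whatever your own plug-in defines:
using Composite.Core.Configuration;

// Read a custom section from composite.config (hypothetical names).
var section = (MyPluginConfigurationSection)ConfigurationServices
    .ConfigurationSource
    .GetSection("My.Plugins.MyPluginConfiguration");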
Adding extra properties to already defined sections
Normally .NET would complain if you write elements or attributes in the .config files that aren't recognized by the type responsible for the section or for the element. This makes it hard to write a truly flexible module system where external authors can add specific configuration options to their providers, and therefore we have the notion of an Assembler. It's a ConfigurationElement class with a Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.AssemblerAttribute attribute assigned to it, which in turn takes a Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.IAssembler implementation as argument; that implementation is responsible for reading these custom attributes and values from the element in the .config file and emitting a usable object from them. This way .NET won't complain about an invalid .config file, since we inject a ConfigurationElement object that has properties for all our custom attributes, and we can get hold of them when reading the configuration through the IAssembler.
Slides
Some overview slides can be found at these links:
Overview
Extensibility points
Page request handling
Function system
Data system
Data type system
Inspiration and examples
The C1Contrib project on GitHub is a very good introduction to how to interact with the different parts of C1. It's a collection of small packages that can be used as they are, or for inspiration. There are packages that manipulate dynamic types to enable interface inheritance. Other packages use the JavaScript API in the Console, while others show how to make function providers, trees and hook commands onto existing elements. There are even examples of how to manipulate the SOAP web service communication going on between client and server, so you can make it do things the way you want. And the list goes on.

I'm unsure as to what is the set-in-stone way to access databases

I have a good deal of experience programming with VB6, VB.NET, C# and so on, and have used ADO, then SubSonic, and now I am learning NHibernate since most of the prospective jobs I can go for use NHibernate.
The thing is, I have been programming based on what I have been taught, read or come to understand as best practice. Recently, someone threw a spanner in the works and had me thinking. Up until now, I have been accessing the database(s) from both the core application and attached DLLs that I write.
What this person said is as follows, and hence my question:
I can tell you
that you wouldn't normally want to do this - an external class library shouldn't have access to the database
What I was trying to do was to have a shared/static class for NHibernate sessions that could be consumed both in the global scope of the app and from any DLL. This class was to be in a "core" DLL which all DLLs and the application reference. Like I said, I'm learning NHibernate so it may not be the way.
To say I'm questioning my database access methods is putting it lightly.
Can anyone put me straight on this?
Edit:
I suppose, looking at what has been commented already, it depends on how the database is being accessed. I would never tend to hardcode username/password credentials etc. in any DLL by any means.
More specifically, my query is in relation to NHibernate's sessions. I have a static helper class which is called at application start; a new session is then created and attached to the current context (in the case of web applications), and then whenever I need the session I call "GetCurrentSession". This static class is in the "core" DLL and can be accessed by any DLL etc. that references it. This behaviour is intended. My only question is: is this OK? Should I be doing it another way?
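For context, a minimal sketch of that kind of helper using NHibernate's contextual sessions might look like the following (it assumes current_session_context_class is set in the NHibernate configuration, and the SessionManager name is just illustrative):
using NHibernate;
using NHibernate.Cfg;
using NHibernate.Context;

// Illustrative helper living in the "core" DLL; other assemblies call
// SessionManager.GetCurrentSession() instead of opening sessions themselves.
public static class SessionManager
{
    private static readonly ISessionFactory sessionFactory =
        new Configuration().Configure().BuildSessionFactory();

    // Called at the start of a request / unit of work.
    public static void BindNewSession()
    {
        CurrentSessionContext.Bind(sessionFactory.OpenSession());
    }

    // Called wherever the active session is needed.
    public static ISession GetCurrentSession()
    {
        return sessionFactory.GetCurrentSession();
    }

    // Called at the end of the request / unit of work.
    public static void UnbindAndDispose()
    {
        var session = CurrentSessionContext.Unbind(sessionFactory);
        if (session != null)
        {
            session.Dispose();
        }
    }
}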
A couple of reasons would be:
Access to the database: how do you cover off the username/password?
Sharing the DLL: a "bad" application may get hold of your DLL and link with it to get access to your database.
Having said this, if you have proper security on files etc., then I would have thought using a DLL would probably be a reasonable way to go.
Assuming that the username and password are not stored directly in the DLL (but maybe passed as parameters, or passed as a complete connection object) this isn't so bad.
The possible bad practice here might be accessing the same database for the same purpose from different places - core app and DLL. This could get confusing quickly to a new developer, unless the separation is clear and logical.
Myself, I might try to move ALL (or almost all) data access to a DLL just for that purpose, then have the serious application logic (or as much as possible) in the core app or yet another DLL.
