Test Function in MVC Project - sql-server

So I want to test one of the Functions in my Web Project, but it's not actually connected to anything else in the project yet (someone else is working on that part). The Function takes in an "ID" field, goes off and does some queries, gets some data, performs some calculations on it, and then writes a bunch of lines to a FileStream and returns that stream. I pretty much just want to test it by having it write the file to my own computer locally, and then working with that file directly after the Function completes.
So my questions are mainly:
1) How do I call this Function just for testing purposes, so I can test all the queries/calculations/file writes, etc. without it being connected to another part of the application just yet?
2) How can I change the 'Return fs' for the FileStream so that it writes to my own computer locally and I can view the file that has been written?
Thanks guys!

To make your function testable you need to isolate all of its dependencies and replace them in your test with stubs/mocks. You can achieve this by creating wrappers around the file system classes and making sure your data layer classes have interfaces. With that in place your code could look like:
public class Something
{
    private readonly IDataProvider provider;
    private readonly IFileSystem fileSystem;

    public Something(IDataProvider provider, IFileSystem fileSystem)
    {
        this.provider = provider;
        this.fileSystem = fileSystem;
    }

    public void DoThing(int id)
    {
        // Make the database call to get the data
        var data = provider.GetData(id);

        // Write the result through the file-system wrapper
        fileSystem.Write("someFilePath", data);
    }
}
With this you can write a test such as the following (in this case using Moq-style syntax):
public void SomeTest()
{
    var mockDataProvider = new Mock<IDataProvider>();
    var mockFileSystem = new Mock<IFileSystem>();
    var something = new Something(mockDataProvider.Object, mockFileSystem.Object);

    var data = "someData";
    mockDataProvider.Setup(x => x.GetData(5)).Returns(data);

    something.DoThing(5);

    mockFileSystem.Verify(x => x.Write("someFilePath", data));
}
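If you also want to see the real output on disk while experimenting (your question 2), you can pass a genuine implementation of the wrapper into the class instead of the mock. A minimal sketch, assuming an IFileSystem interface shaped like the one used above (the interface members, class names and paths here are illustrative, not taken from your project):
using System.IO;

public interface IFileSystem
{
    void Write(string path, string contents);
}

// Writes straight to the local disk - handy for eyeballing the generated file.
public class LocalFileSystem : IFileSystem
{
    public void Write(string path, string contents)
    {
        // Creates (or overwrites) the file at the given local path.
        File.WriteAllText(path, contents);
    }
}
You could then call the function from a throwaway console app or test method, e.g. new Something(someRealDataProvider, new LocalFileSystem()).DoThing(5), pass a path such as @"C:\temp\output.txt" wherever the wrapper is given its file path, and inspect the file afterwards.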

You need to read up on unit testing, as it solves your problem in many ways - it will also introduce you to dependency injection and mocking, which are a great way to handle what you're doing.
Here is an overview...
Set up your class so it accepts the data access and file writer in the constructor. You can then pass in mock or stub versions of the data access and file writer, so you don't physically need to connect to a database or write to the file system to test your code.
In the "real world" you pass in the genuine data access and file writer.
In "test world" you use something such as MOQ or Rhino Mocks to create a pretend version of the data access, this means you can predict what will come back from the data access every time you test as it isn't the real database, it's some data you have prepared. You can also create a pretend file-writer that doesn't actually need to write a real file.
You can then test your class in isolation.
Dependency Injection:
http://msdn.microsoft.com/en-us/magazine/cc163739.aspx
Moq
http://code.google.com/p/moq/

Related

Flink integration test(s) with Testcontainers

I have a simple Apache Flink job that looks very much like this:
public final class Application {

    public static void main(final String... args) throws Exception {
        final var env = StreamExecutionEnvironment.getExecutionEnvironment();
        final var executionConfig = env.getConfig();
        final var params = ParameterTool.fromArgs(args);
        executionConfig.setGlobalJobParameters(params);
        executionConfig.setParallelism(params.getInt("application.parallelism"));

        final var source = KafkaSource.<CustomKafkaMessage>builder()
                .setBootstrapServers(params.get("application.kafka.bootstrap-servers"))
                .setGroupId(params.get("application.kafka.consumer.group-id"))
                // .setStartingOffsets(OffsetsInitializer.committedOffsets(OffsetResetStrategy.EARLIEST))
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setTopics(params.get("application.kafka.listener.topics"))
                .setValueOnlyDeserializer(new MessageDeserializationSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "custom.kafka-source")
                .uid("custom.kafka-source")
                .rebalance()
                .flatMap(new CustomFlatMapFunction())
                .uid("custom.flatmap-function")
                .filter(new CustomFilterFunction())
                .uid("custom.filter-function")
                .addSink(new CustomDiscardSink()) // Will be a Kafka sink in the future
                .uid("custom.discard-sink");

        env.execute(params.get("application.job-name"));
    }
}
The problem is that I would like to provide an integration test for the entire application - sort of like an end-to-end (set of) test(s) for the whole job. I'm using Testcontainers, but I'm not really sure how to move forward with this. For instance, this is what the test looks like (for now):
@Testcontainers
final class ApplicationTest {

    private static final DockerImageName DOCKER_IMAGE = DockerImageName.parse("confluentinc/cp-kafka:7.0.1");

    @Container
    private static final KafkaContainer KAFKA_CONTAINER = new KafkaContainer(DOCKER_IMAGE);

    @ClassRule // How come this works in JUnit Jupiter? :/
    public static MiniClusterResource cluster;

    @BeforeAll
    static void init() {
        KAFKA_CONTAINER.start();
        // ...probably need to wait and create the topic(s) as well

        final var config = new MiniClusterResourceConfiguration.Builder().setNumberSlotsPerTaskManager(2)
                .setNumberTaskManagers(1)
                .build();
        cluster = new MiniClusterResource(config);
    }

    @Test
    void main() throws Exception {
        // new Application(); // ...what's next?
    }
}
I'm not sure how to implement what's required to trigger the job as-is from that point on. Basically, I would like to execute what was defined before, without (almost) any modifications — I've seen plenty of examples that practically build the entire job again, so that's not an option.
Can somebody provide any pointers here?
MessageDeserializationSchema is unbounded, so isEndOfStream returns false. Not sure if that's an impediment.
In order to make the pipeline more testable, I suggest you create a method on your Application class that takes a source and a sink as parameters, and creates and executes the pipeline, using those connectors.
In your tests you can call that method with special sources and sinks that you use for testing. In particular, you will want to use a KafkaSource that uses .setBounded(...) in the tests so that it cleanly handles just the range of data intended for the test(s).
The solutions and tests for the Apache Flink training exercises are organized along these lines; for example, see RideCleansingSolution.java and RideCleansingIntegrationTest.java. These examples don't use Kafka or Testcontainers, but hopefully they'll still be helpful.
I would suggest you instrument your application as an opaque-box test by interacting with it through its public API. This can be done either as an out-of-process test (e.g. by running your application in a container as well, using Testcontainers) or as an in-process test (by creating your Application and calling its main() method).
Now, in your comments you explained that you want to check for the side effects of interacting with your application (Kafka messages being published). To check this, connect to the KafkaContainer with your own KafkaConsumer from within the test and use a library such as Awaitility to wait until the messages have been received.

How to share Javascript Business Rules between Client and Server?

I am creating a MEAN stack application and I want to clarify the below.
From a coding standards perspective I know that validations should be executed on both the client side and the server side. What I would like to achieve is to execute the exact same validation code in both places so that I do not repeat the code. This is essentially shared code for the client and server side.
So how can I have AngularJS and Express.js invoke the same .js file for performing validations? Is it even possible?
Thanks!
You sure can do this. This approach is used by RemObjects DataAbstract (http://old.wiki.remobjects.com/wiki/Business_Rules_Scripting_API). The principle is to define business rules that apply either on both the client and the server, or on the server only. You will almost never have to check business rules ONLY on the client, because you can never "trust" the client to check your business rules.
CQRS and DDD are two architectural principles that could help you here. Domain-Driven Design will kind of "clean" or "refine" your code, pushing the infrastructure away from the core "domain" logic. And business rules apply only in the domain, so it's a good idea to keep the domain isolated from the rest.
Command-Query Responsibility Segregation. I like this one a lot. Basically, you define a set of commands that are validated before they are applied. There's no more machine-like code that looks like Model.Set('a', 2). Your code, using this principle, will look like MyUnderstandableBusinessObject.UnderstandableCommand(aFriendlyArgument). When it comes to applying business rules, it is very handy that your actual commands reflect the use cases of your domain.
I also always encounter this problem when I work on node.js / javascript projects. The problem is that you do not have a standardized ORM that can be understood by both the client AND the server. This is paradoxical, as node.js and the browser run the same language. When I was drawn towards Node.js, I told myself, man, both client and server are running the same language, that's going to save sooo much time. But that turned out to be kind of false, as there are not that many mature and professional tools out there, even if npm is really active.
I wanted to build an ORM too that could be understood by both the client and the server, and add a relational aspect to it (so that it was compatible with SQL), but I kind of abandoned the project. https://github.com/ludydoo/affinity
But, there are a couple of other solutions. Backbone is one, and it's lightweight.
The actual implementation of your business-rule checking is what you are going to have to work on. You'll want to extract the "validation" part out of your model into another object that can be shared. Something to get you started:
https://jsfiddle.net/ludydoo/y0otcvrf/
BusinessRuleRepository = function() {
    this.rules = [];
}

BusinessRuleRepository.prototype.addRule = function(aModelClass, operation, callback) {
    this.rules.push({
        class: aModelClass,
        operation: operation,
        callback: callback
    })
}

BusinessRuleRepository.prototype.validate = function(object, operation, args) {
    _.forIn(this.rules, function(rule) {
        if (object.constructor == rule.class && operation == rule.operation) {
            rule.callback(object, args)
        }
    })
}

MyObject = function() {
    this.a = 2;
}

MyObject.prototype.setA = function(value) {
    aBusinessRuleRepo.validate(this, 'setA', arguments);
    this.a = value;
}

// Creating the repository
var aBusinessRuleRepo = new BusinessRuleRepository();

//-------------------------------
// shared.js

var shared = function(aRepository) {
    aRepository.addRule(MyObject, 'setA', function(object, args) {
        if (args[0] < 0) {
            throw 'Invalid value A';
        }
    })
}

if (aBusinessRuleRepo != undefined) {
    shared(aBusinessRuleRepo);
}

//-------------------------------
// Creating the object
var aObject = new MyObject();

try {
    aObject.setA(-1); // throws
} catch (err) {
    alert('Shared Error : ' + err);
}

aObject.setA(2);

//-------------------------------
// server.js

var server = function(aRepository) {
    aRepository.addRule(MyObject, 'setA', function(object, args) {
        if (args[0] > 100) {
            throw 'Invalid value A';
        }
    })
}

if (aBusinessRuleRepo != undefined) {
    server(aBusinessRuleRepo);
}

//-------------------------------
// on server
try {
    aObject.setA(200); // throws
} catch (err) {
    alert('Server Error :' + err);
}
The first thing of all is to model and define your domain. You'll have a set of classes that represent your business objects, as well as methods that correspond to business operations. (I would really go with CQRS for your case.)
The model definition would be shared between the client and the server.
You would have to define two files, or two objects, kept separate: ServerRules and SharedRules. Those will be a set of Repository.addRule() calls that register your business rules in the repository. Your client will get the Shared.js business rules, and the server the Shared.js + Server.js business rules. Those business rules will always be applied to your objects this way.
The little code example I showed you is very simple and checks business rules only before the command is applied. Maybe you could add 'beforeCommand' and 'afterCommand' parameters to check business rules before and after changes are made. Then, if you add the possibility of checking business rules after a command is applied, you must be able to roll back the changes (Backbone has this functionality, I think).
Good luck
You could automate this a little by automatically getting the name of the method you are in (Can I get the name of the currently running function in JavaScript?)
function checkBusinessRules(model, arguments) {
    businessRuleRepo.validate(model, getCalleeName, arguments);
}

Model.prototype.command = function(arg) {
    checkBusinessRules(this, arguments);
    // perform logic
}
EDIT 2
A small detail I would like to correct in my first answer: do not implement your business rules on property setters! Use business operation names instead:
You must make sure that you always set your model properties through methods. If you set your model properties directly by assigning a value, you're bypassing the whole business rule processor thing.
The cheap way is to do this through standard setters such as
SetMyProperty(value);
SetAnotherProperty(value);
This is kind of low-level business rule logic (based on getters and setters), so your business rules will also be low-level, which is kind of bad.
Better, you should do this through business understandable high-level method names such as
RegisterClient(client);
InvalidateMandate(mandate);
Then, your business rules become way more understandable and you'll almost have a good time implementing them.
BusinessRuleRepository.add(ModelClass, "RegisterClient", function() {
    if (!Session.can('RegisterClient')) { fail('Unauthorized'); }
})

Grails 2.4.4 Multiple datasources, separate drivers, IntegrationSpec

I am attempting to use multiple datasources in a Grails 2.4.4 project. According to the docs, this should be possible:
http://www.grails.org/doc/2.4.4/guide/conf.html#multipleDatasources
My primary dataSource (the one I want to use for all domain classes) is using H2 at the moment, as configured by the default DataSource.groovy configuration. My second, read-only datasource is SQL Server, and I tried to declare it as follows at the top level of my DataSource.groovy config (shared by all environments):
ds {
    pooled = true
    dialect = "org.hibernate.dialect.SQLServer2008Dialect"
    driverClassName = "net.sourceforge.jtds.jdbc.Driver"
    url = "jdbc:jtds:sqlserver://myserver:1433/mydb;domain=mydomain;useNTLMv2=true;user=myuser"
    dbCreate = "none"
}
(Don't let the URL throw you off - I'm just having to use Windows Auth with JTDS. I've tested this via third-party clients as well.)
I inject this into my service class and use it, and everything appears to hook up well:
def dataSource_ds

def serviceMethod() {
    Sql ds = new Sql(dataSource_ds)
    String query = "SELECT ... "
    def results = ds.rows(query)
    println "Results are ${results.size()}"
    return "Some value"
}
But when I try to access this from an IntegrationSpec-backed Integration Test, I noticed that I was getting "schema not found" errors for valid schemas referred to by my query string, such as "dbo". And the stack trace of any errors from this setup looks like this:
org.h2.jdbc.JdbcSQLException: Schema "DBO" not found; SQL statement:
...
at org.h2.message.DbException.getJdbcSQLException(DbException.java:329)
at org.h2.message.DbException.get(DbException.java:169)
at org.h2.message.DbException.get(DbException.java:146)
at org.h2.command.Parser.readTableOrView(Parser.java:4774)
at org.h2.command.Parser.readTableFilter(Parser.java:1083)
at org.h2.command.Parser.parseSelectSimpleFromPart(Parser.java:1689)
at org.h2.command.Parser.parseSelectSimple(Parser.java:1796)
at org.h2.command.Parser.parseSelectSub(Parser.java:1683)
at org.h2.command.Parser.parseSelectUnion(Parser.java:1526)
at org.h2.command.Parser.parseSelect(Parser.java:1514)
at org.h2.command.Parser.parsePrepared(Parser.java:404)
at org.h2.command.Parser.parse(Parser.java:278)
at org.h2.command.Parser.parse(Parser.java:250)
at org.h2.command.Parser.prepareCommand(Parser.java:217)
at org.h2.engine.Session.prepareLocal(Session.java:414)
at org.h2.engine.Session.prepareCommand(Session.java:363)
...
Now why would THIS datasource be trying to use the H2 driver?
In case it's relevant, my Integration test looks like this:
void "serviceMethod" () {
when: "service method is called"
String response = myService.serviceMethod()
then: "we should get the appropriate text back"
response.equals("Some value")
}
If, in the Service class, I hard-code the connection using a constructor of the Groovy Sql object, the integration test works fine, and any stack traces go through the JTDS driver. But when I try to use the injected datasource, things get strange.
Any idea what I'm doing wrong here?
Just to close the loop on this and hopefully save someone pain on this oversight in the future:
Grails uses an in-memory database when running tests. Make sure to read up on the other differences between integration tests and production here:
http://www.grails.org/doc/latest/guide/testing.html#integrationTesting
This feature makes the use of external (read-only) datasources during any tests pretty interesting, but some of that is to be expected (a test which depends on an external datasource is not a very good test in the long run). I hope to refactor my app and its testing approach at some point (e.g., to use a simple DAO and mock that during the test), because I don't really care about asserting the contents of the external datasource from my app's tests.

How would I configure Effort Testing Tool to mock Entity Framework's DbContext without the actual SQL Server Database up and running?

Our team's application development involves using the Effort testing tool to mock our Entity Framework DbContext. However, it seems that Effort needs to see the actual SQL Server database that the application uses in order to mock our Entity Framework DbContext, which seems to go against proper unit testing principles.
The reason being that in order to unit test our application code by mocking anything related to database connectivity (for example Entity Framework's DbContext), we should never need a database to be up and running.
How would I configure Effort to mock Entity Framework's DbContext without the actual SQL Server database up and running?
Update:
@gert-arnold We are using the Entity Framework Model First approach to implement the back-end model and database.
The following excerpt is from the test code:
connection = Effort.EntityConnectionFactory.CreateTransient("name=NorthwindModel");
jsAudtMppngPrvdr = new BlahBlahAuditMappingProvider();
fctry = new BlahBlahDataContext(jsAudtMppngPrvdr, connection, false);
qryCtxt = new BlahBlahDataContext(connection, false);
audtCtxt = new BlahBlahAuditContext(connection, false);
mockedReptryCtxt = new BlahBlahDataContext(connection, false);
_repository = fctry.CreateRepository<Account>(mockedReptryCtxt, null);
_repositoryAccountRoleMaps = fctry.CreateRepository<AccountRoleMap>(null, _repository);
The "name=NorthwindModel" pertains to our edmx file which contains information about our Database tables
and their corresponding relationships.
If I remove the "name=NorthwindModel" by making the connection like the following line of code, I get an error stating that it expects an argument:
connection = Effort.EntityConnectionFactory.CreateTransient(); // throws error
Could you please explain how the aforementioned code should be rewritten?
You only need that connection string because Effort needs to know where the EDMX file is.
The EDMX file contains all the information required for creating an in-memory store with a schema identical to the one in your database. You have to specify a connection string only because I thought it would be convenient if the user didn't have to mess with EDMX paths.
If you check the implementation of the CreateTransient method you will see that it merely uses the connection string to get the metadata part of it.
public static EntityConnection CreateTransient(string entityConnectionString, IDataLoader dataLoader)
{
    var metadata = GetEffortCompatibleMetadataWorkspace(ref entityConnectionString);
    var connection = DbConnectionFactory.CreateTransient(dataLoader);
    return CreateEntityConnection(metadata, connection);
}

private static MetadataWorkspace GetEffortCompatibleMetadataWorkspace(ref string entityConnectionString)
{
    entityConnectionString = GetFullEntityConnectionString(entityConnectionString);
    var connectionStringBuilder = new EntityConnectionStringBuilder(entityConnectionString);

    return MetadataWorkspaceStore.GetMetadataWorkspace(
        connectionStringBuilder.Metadata,
        metadata => MetadataWorkspaceHelper.Rewrite(
            metadata,
            EffortProviderConfiguration.ProviderInvariantName,
            EffortProviderManifestTokens.Version1));
}
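In other words, no database has to be reachable at all; the connection string is parsed only for its metadata part. A minimal sketch of what a test can then look like (BlahBlahDataContext and its (connection, contextOwnsConnection) constructor are taken from your snippet, "name=NorthwindModel" is whatever name your EDMX is registered under, and the [Test] attribute stands in for whichever test framework you use):
// using System.Data.Entity.Core.EntityClient;  (System.Data.EntityClient on older EF versions)

[Test]
public void Repository_works_without_a_real_database()
{
    // Effort reads the csdl/ssdl/msl metadata referenced by "name=NorthwindModel"
    // and builds an equivalent in-memory store; no SQL Server connection is opened.
    EntityConnection connection =
        Effort.EntityConnectionFactory.CreateTransient("name=NorthwindModel");

    using (var context = new BlahBlahDataContext(connection, false))
    {
        // Arrange / act / assert against 'context' here. Each test gets an empty,
        // isolated store unless you pass an IDataLoader to CreateTransient to seed it.
    }
}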

How to dynamically discover all XAML files in all modules in a Silverlight prism app

Is there an easy way to dynamically discover all the XAML files within all the currently loaded modules (specifically of a Silverlight Prism application)? I am sure this is possible, but not sure where to start.
This has to occur on the Silverlight client: We could of course parse the projects on the dev machine, but that would reduce the flexibility and would include unused files in the search.
Basically we want to be able to parse all XAML files in a very large Prism project (independent of loading them) to identify all localisation strings. This will let us build up an initial localisation database that includes all our resource-binding strings and also create a lookup of which XAML files they occur in (to make editing easy for translators).
Why do this?: The worst thing for translators is to change a string in one context only to find it was used elsewhere with slightly different meaning. We are enabling in-context editing of translations from within the application itself.
Update (14 Sep):
The standard method for iterating assemblies is not available to Silverlight due to security restrictions. This means the only improvement to the solution below would be to cooperate with the Prism module management if possible. If anyone wants to provide a code solution for that last part of this problem there are points available to share with you!
Follow-up:
Iterating the content of XAP files in a module-based project seems like a really handy thing to be able to do for various reasons, so putting up another 100 rep to get a real answer (preferably working example code). Cheers and good luck!
Partial solution below (working but not optimal):
Below is the code I have come up with, which is a paste-together of techniques from this link on Embedded resources (as suggested by Otaku) and my own iteration of the Prism Module Catalogue.
Problem 1 - all the modules are already loaded, so this is basically having to download them all a second time as I can't work out how to iterate all currently loaded Prism modules.
If anyone wants to share the bounty on this one, you still can help make this a complete solution!
Problem 2 - There is apparently a bug in the ResourceManager that requires you to get the stream of a known resource before it will let you iterate all resource items (see the note in the code below). This means I have to have a dummy resource file in every module. It would be nice to know why that initial GetStream call is required (or how to avoid it).
private void ParseAllXamlInAllModules()
{
    IModuleCatalog mm = this.UnityContainer.Resolve<IModuleCatalog>();
    foreach (var module in mm.Modules)
    {
        string xap = module.Ref;
        WebClient wc = new WebClient();
        wc.OpenReadCompleted += (s, args) =>
        {
            if (args.Error == null)
            {
                var resourceInfo = new StreamResourceInfo(args.Result, null);
                var file = new Uri("AppManifest.xaml", UriKind.Relative);
                var stream = System.Windows.Application.GetResourceStream(resourceInfo, file);
                XmlReader reader = XmlReader.Create(stream.Stream);
                var parts = new AssemblyPartCollection();
                if (reader.Read())
                {
                    reader.ReadStartElement();
                    if (reader.ReadToNextSibling("Deployment.Parts"))
                    {
                        while (reader.ReadToFollowing("AssemblyPart"))
                        {
                            parts.Add(new AssemblyPart() { Source = reader.GetAttribute("Source") });
                        }
                    }
                }
                foreach (var part in parts)
                {
                    var info = new StreamResourceInfo(args.Result, null);
                    Assembly assy = part.Load(System.Windows.Application.GetResourceStream(info, new Uri(part.Source, UriKind.Relative)).Stream);
                    // Get embedded resource names
                    string[] resources = assy.GetManifestResourceNames();
                    foreach (var resource in resources)
                    {
                        if (!resource.Contains("DummyResource.xaml"))
                        {
                            // To get the actual values - create the table
                            var table = new Dictionary<string, Stream>();
                            // All resources have ".resources" in the name - so remove it
                            var rm = new ResourceManager(resource.Replace(".resources", String.Empty), assy);
                            // Seems like some issue here, but without getting any real stream the next statement doesn't work...
                            var dummy = rm.GetStream("DummyResource.xaml");
                            var rs = rm.GetResourceSet(Thread.CurrentThread.CurrentUICulture, false, true);
                            IDictionaryEnumerator enumerator = rs.GetEnumerator();
                            while (enumerator.MoveNext())
                            {
                                if (enumerator.Key.ToString().EndsWith(".xaml"))
                                {
                                    table.Add(enumerator.Key.ToString(), enumerator.Value as Stream);
                                }
                            }
                            foreach (var xaml in table)
                            {
                                TextReader xamlreader = new StreamReader(xaml.Value);
                                string content = xamlreader.ReadToEnd();
                                {
                                    // This is where I do the actual work on the XAML content
                                }
                            }
                        }
                    }
                }
            }
        };
        // Do the actual read to trigger the above callback code
        wc.OpenReadAsync(new Uri(xap, UriKind.RelativeOrAbsolute));
    }
}
Use reflection via GetManifestResourceNames and parse from there to get only those ending with .xaml. Here's an example of using GetManifestResourceNames: Enumerating embedded resources. Although the sample shows how to do this with a separate .xap, you can do this with the loaded one.
I've seen people complain about some pretty gross bugs in Prism
Dissecting your problems:
Problem 1: I am not familiar with Prism, but from an object-oriented perspective your Module Manager class should keep track of whether a Module has been loaded and, if it has not been loaded already, allow you to recursively load other Modules using a map function on the List<Module> or whatever type Prism uses to represent assemblies abstractly. In short, have your Module Manager hold hidden state that represents the list of Modules loaded. Your map function should then take that list of already-loaded Modules as a seed value and give back the list of Modules that haven't been loaded. You can then either internalize the logic in a public LoadAllModules method or allow someone to iterate a public List<UnloadedModule> where UnloadedModule : Module and let them choose what to load. I would not recommend exposing both methods simultaneously due to concurrency concerns when the Module Manager is accessed via multiple threads.
Problem 2: The initial GetStream call is required because ResourceManager lazily evaluates the resources. Intuitively, my guess is that this is because satellite assemblies can contain multiple locale-specific modules, loading all of them into memory at once could exhaust the heap, and these are unmanaged resources. You can look at the code using RedGate's .NET Reflector to determine the details. There might be a cheaper method you can call than GetStream. You might also be able to trigger it to load the assembly by tricking it into loading a resource that is in every Silverlight assembly. Try ResourceManager.GetObject("TOOLBAR_ICON") or maybe ResourceManager.GetStream("TOOLBAR_ICON") -- note that I have not tried this and am typing this suggestion as I am about to leave for the day. My rationale for it being consistently faster than your SomeDummy.Xaml
approach is that I believe TOOLBAR_ICON is hardwired to be the zeroth resource in every assembly. Thus it will be read very early in the Stream. Faaaaaast. So it is not just avoiding needing SomeDummy.Xaml in every assembly of your project that I am suggesting; I am also recommending micro-optimizations.
If these tricks work, you should be able to significantly improve performance.
Additional thoughts:
I think you can clean up your code further.
IModuleCatalog mm = this.UnityContainer.Resolve<IModuleCatalog>();
foreach (var module in mm.Modules)
{
could be refactored to remove the reference to UnityContainer. In addition, IModuleCatalog would be instantiated via a wrapper around the List<Module> I mentioned in my reply to Problem 1. In other words, the IModuleCatalog would be a dynamic view of all loaded modules. I am assuming there is still more performance that can be pulled out of this design, but at least you are no longer dependent on Unity. That will help you better refactor your code later on for more performance gains.
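A rough sketch of that refactoring (the XamlCatalogScanner name and the ParseXap helper are illustrative, not part of Prism; the point is simply that the catalog is handed in rather than resolved from the container):
// Prism 4: using Microsoft.Practices.Prism.Modularity;
// Older CAL builds: using Microsoft.Practices.Composite.Modularity;

public class XamlCatalogScanner
{
    private readonly IModuleCatalog moduleCatalog;

    // The catalog is injected, so this class no longer needs to know about Unity.
    public XamlCatalogScanner(IModuleCatalog moduleCatalog)
    {
        this.moduleCatalog = moduleCatalog;
    }

    public void ParseAllXamlInAllModules()
    {
        foreach (var module in this.moduleCatalog.Modules)
        {
            // module.Ref is the XAP uri; the WebClient / AppManifest.xaml handling
            // from the original ParseAllXamlInAllModules goes inside ParseXap.
            ParseXap(module.Ref);
        }
    }

    private void ParseXap(string xapUri)
    {
        // ...
    }
}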
