To resolve jar conflicts in my application, I use the Maven Shade plugin's relocation feature. It works for me, but I feel it is a hack. I would like to understand the downsides of using the relocation feature, if any.
The main issue is that it can fail in cases where the package name is spelled out as data rather than just imported. For example, if you are using reflection and instantiating a class by its fully qualified name (including the package name), relocation can miss it and you will end up loading the wrong class, or none at all. Similar problems can occur when the package is named in a manifest (there is a transformer for that). See the plugin documentation for more information.
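For illustration, here is a minimal sketch (class, package, and property names are all hypothetical) of the kind of reflection that relocation cannot fix, because the class name only exists outside the bytecode:

import java.io.InputStream;
import java.util.Properties;

// The implementation class is chosen from a properties file at runtime, so the
// shade plugin's bytecode rewriting never sees the package name and cannot
// relocate it.
public class PluginLoader {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        try (InputStream in = PluginLoader.class.getResourceAsStream("/plugin.properties")) {
            props.load(in);
        }
        // plugin.properties contains e.g.  parser.impl=com.example.util.JsonParser
        String className = props.getProperty("parser.impl");
        // After relocation the class only exists as shaded.com.example.util.JsonParser,
        // so this throws ClassNotFoundException (or, worse, silently loads an
        // unrelocated copy that happens to be on the classpath).
        Object parser = Class.forName(className).getDeclaredConstructor().newInstance();
        System.out.println(parser);
    }
}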
Another place where this can be an issue is a third-party dependency that uses the same dependency you relocated. Consider, for example, package A, which is provided. If package A depends on the package you relocated, it will at runtime use the provided instance instead of the relocated one. This can lead to unforeseen effects.
An additional issue is that in some cases the package may hold initialized or static state (e.g. it downloads some information once, or keeps a large static table). In these cases it is important to understand that there are now TWO completely separate copies of the package, each with its own static state.
I just started using flow-typed definitions for the popular libraries in a React Native app, such as React Navigation, but I find it quite hard to find documentation on the types and how to use them. I'm still getting errors in my IDE, and I feel like Flow is wasting more of my time than it is adding to my developer experience, because I have to look up the types all the time (and sometimes don't even find an answer). Any advice about that?
A complex web application using many npm modules is very rarely going to be strongly typed throughout. The goal of strong typing in JS is largely to have as much typing as is feasible or even reasonable. Modules which do not have libdefs will come in as any and that's okay. Obviously it would be great if everything you pulled in had full types, but given the way progress is made this is practically impossible. Add to this the fact that the simple act of upgrading flow will often surface more caught errors in your codebase, and you end up having to accept that typing is a progressive process; it shouldn't really be a blocking one.
Now that that's out of the way, you seem to have a number of different sub-questions:
I have to look up the types all the time
Not entirely sure what you mean by this, but you might be saying it's hard to find types for the package you're using. Make sure you're familiar with how the flow-typed CLI tool works (npx flow-typed); it will help you search for and install compatible libdefs. If you don't find anything for a module in flow-typed, then poke around the source GitHub repo and make sure flow types aren't shipped with the module itself. If you come across a package with a .d.ts (TypeScript) file, try converting it to a libdef with flowgen. If none of that proves fruitful, you should probably just forego types and carry on.
In this case, I would actually start my own libdef (npx flow-typed create-stub <package name>) and fill in some basic types as I went. You can start really simple; I have a libdef currently for react-select that only checks one prop of the component, the options prop (I don't remember why I have this, however :P). Again, progressive typing is the goal. Checking that one prop is actually really nice compared to checking none.
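For illustration, a stub like that can be tiny. Here is a rough sketch of what a minimal react-select libdef might look like (the prop shape is made up; only options is checked):

// flow-typed/npm/react-select_vx.x.x.js
// Hypothetical minimal stub: only the `options` prop is checked, everything
// else is accepted as `any`.
declare module 'react-select' {
  declare type Option = { label: string, value: string };
  declare module.exports: React$ComponentType<{
    options: Array<Option>,
    [prop: string]: any,
  }>;
}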
I find it quite hard to find the documentation on types and how to use them
There's generally no real documentation for libdefs in flow-typed unless the package author has written some somewhere. I usually read the libdefs themselves, but if you find usage confusing I would recommend looking at the tests associated with the libdef. You can also dig through any relevant issues or PRs to find usage examples.
sometimes don't even find an answer
Add a $FlowFixMe and come back to it later if it slows you down too much. All of these things will become much more manageable as you become more accustomed to flow and strong typing in general, and both flow and libdefs are constantly improving.
I'm still getting errors in my IDE
If you can't fix them, add a $FlowFixMe and come back later. Flow's source code actually includes a utility for adding a $FlowFixMe to every error, but since it isn't currently shipped to npm you have to clone the source to use it.
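For example (the module and function here are made up), the suppression comment simply goes on the line above the error:

// @flow
// Hypothetical usage: silence one error until the libdef catches up.
import { navigate } from 'some-partially-typed-module';

// $FlowFixMe: the libdef types `params` incorrectly; revisit once it's fixed
navigate({ routeName: 'Home', params: { userId: 42 } });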
Given that Composer can work with virtual packages in the provide section of composer.json, how are the versions of those virtual packages managed if nobody is responsible for them?
I mean, if anyone can create an "evil" package stating that it provides a specific virtual package (for which there is no repo anywhere), then they can specify whatever version they like. This could perhaps collide with other "good" packages, right?
From my experience, this "virtual package" feature hasn't been widely used, and it definitely has its drawbacks due to the way it is currently implemented.
If you take a look at this search result on Packagist, you'll see that the three top packages are psr/log (the real interface package), psr/log again as a virtual package (provided by two other real packages, but in the wrong way), and psr/log-implementation (provided by plenty of packages, including monolog/monolog).
This example illustrates that people will misunderstand this feature and do the wrong thing. You cannot provide psr/log because that is a real package that has a real interface definition. It makes even less sense to require psr/log and at the same time also provide it.
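To make the correct pattern concrete, here is a sketch of what a well-behaved logger package's composer.json might contain (the acme/logger name is made up; the version numbers are only examples). It requires the real psr/log interface package and provides the virtual psr/log-implementation package, which is exactly what monolog/monolog does:

{
    "name": "acme/logger",
    "description": "Hypothetical PSR-3 compliant logger",
    "require": {
        "psr/log": "^1.0"
    },
    "provide": {
        "psr/log-implementation": "1.0.0"
    }
}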
You also correctly spotted that there is no central entity that decides which versions of a virtual package should exist, let alone which virtual package names should exist. This isn't that much of a problem, because deciding on the names of real packages works the same way: one developer thinks of a name, and that's all. It's unlikely that this procedure creates involuntary conflicts, because usually the GitHub account name is used as the vendor name, and GitHub has already made those unique. Malicious conflicts don't really exist in the real world, as Jordi has pointed out in his blog, due to the general structure Composer uses to name packages.
Back to the virtual package feature: There are two blog postings discussing it.
The first explains using this feature with the example of the virtual psr/log-implementation package mentioned above. I won't replicate that tutorial-like text here.
The second (linked and replied to at the end of the first) discusses what's bad about the whole approach of virtual packages.
Some of the points:
1) Strictly speaking (as in, would the code compile), the code from the library itself doesn't need a package that provides psr/log-implementation. It just needs the LoggerInterface (which happens to be in the psr/log package).
4) psr/log-implementation, or any virtual package for that matter, is very problematic as a package. There is no definition of this virtual package. [...]
5) Some day, someone may decide to introduce another virtual package, called the-real-psr/log-implementation (they can easily do that, right?). Such packages may be completely exchangeable with existing psr/log-implementation packages, but in order to be useful, every existing PSR-3 compliant logger package needs to mention that virtual package too in their provide section. [...]
With all these problems and uncertainties facing good packages, it is no surprise that they do not really use this feature very much.
However, you cannot abuse it for bad things in the way you outline in your question. Yes, any package can declare that it provides any other package. But just like what happened with psr/log, having a package declare that it provides another package will not magically make everyone download it. The way it works: a package declares that it provides another package, and by requiring this package, the virtual package also gets included and will fulfill any dependencies other packages have on that virtual package.
But if you never require the providing package, everything it provides stays out of the equation.
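As a sketch of what that means for a consumer (acme/app is made up), requiring the virtual name alone resolves to nothing; you still have to require a concrete package whose provide section satisfies it, e.g.:

{
    "name": "acme/app",
    "require": {
        "psr/log-implementation": "^1.0",
        "monolog/monolog": "^1.0"
    }
}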
In order to include bad software, someone has to require it. This is best done as an indirect dependency of an innocent-looking library, and it requires the help of an unsuspecting developer who actively pulls this code without properly reviewing it.
Which probably is my central point for everything: if you pull someone's code into your own project, make sure you understand what that code does by reviewing it (which isn't only about malicious things, but also about basic code quality, because some day you may be forced to debug a problem), or make sure you can trust that source enough not to do bad things to you. However, your own code base is not affected by packages you do not require (the last bug with such an effect involved handling of replace information, but I can't find that issue right now).
I am in the process of migrating an existing Adobe Analytics implementation on s_code version 27.5 to DTM. The first step of the migration, and what is in scope for the project, is a straight lift-and-shift of the current s_code into Adobe DTM.
The site has multiple JS files housing functions that need the s object to be initialised in order to work. However, s is initialised inside the s_code contents after most of these JS functions have run, so they throw 's is not defined' errors. It is not being initialised globally as it would be in a standard implementation.
Is there a way I can initialise s globally in the DTM satellite library? I have tried adding var s = {}; in a page load rule under the Third Party/Custom Tags area, but I am only having intermittent luck with it; sometimes errors are still thrown.
Any support/insight into this issue would be most appreciated.
Thanks!
Step 1: Change the Code Configuration to Custom
Note: If you migrated your legacy H code to DTM as a tool, then you should already be familiar with this step and have already done it, since DTM does not allow you to specify legacy H code for the "Managed by Adobe" option.
In the Library Management section of the Adobe Analytics tool, change the Code Configuration option to Custom, and Code Hosted to In DTM.
If you are using the legacy H code library, then you must also check the "Set report suites using custom code below" option. If part of your migration to DTM was a move to the AppMeasurement library, checking this option is optional, depending on how you want to handle report suite routing.
Then, click the Open Editor button to open the editor. You should see the Adobe Analytics library in the code box. If you are migrating legacy H code, then remove everything currently in the box and add your legacy H code library (which you should have already done based on the question).
Step 2: Instantiate the s object
If you are using the legacy H code, then add the following line to the top of the code box, above the library:
window.s = s_gi("[report suite id(s)]");
You will need to replace [report suite id(s)] with the report suite id(s) you want to send the data to. s_gi() requires a value to be passed to it, which is why you must check the checkbox above.
If you are using AppMeasurement library, then add the following line to the top of the code box, above the library:
window.s = new AppMeasurement("[report suite id(s)]");
If you checked the "Set report suites using custom code below" checkbox, then specify the report suite(s) here. If you did not check it, then do not pass anything to AppMeasurement(). Alternatively, you can pass nothing but also add the following underneath it:
s.account="[report suite id(s)]";
Note, however, that in Step 3 you will be setting it in doPlugins anyway, so you don't really need it here (I just added this side note for other readers who may be migrating an AppMeasurement s_code.js to DTM).
Note: Aside from the window.s part, you should already be familiar with this line of code, and already have logic for populating the report suite(s), coming from a legacy implementation. Specifically, you may be using the dynamicAccountXXX variables. If you are upgrading to the AppMeasurement library, then you will need to write your own logic to simulate that, since AppMeasurement (for reasons unclear to anybody) does not have this functionality.
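For example, a rough hand-rolled substitute for that logic might key the report suite off the hostname (the hostname and suite ids below are placeholders):

// Hypothetical replacement for the legacy dynamicAccountList behaviour.
function getReportSuites() {
  var host = window.location.hostname;
  if (host === 'www.example.com') {
    return 'examplecomprod';   // placeholder production report suite id
  }
  return 'examplecomdev';      // placeholder dev/staging report suite id
}
window.s = new AppMeasurement(getReportSuites());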
Step 3: Setting report suite(s) after page load
One of the many caveats of implementing Adobe Analytics as a tool is that DTM (for reasons unclear to anybody) creates a new s object whenever an event-based or direct call rule is triggered and AA is set to trigger. In practice, this means almost all of the variables you set within the custom code boxes in the tool config will not be carried over to subsequent AA calls on a page, report suite(s) being one of them.
What DTM does for the report suite is set it to the specified Production Report Suite(s) if DTM is in production mode, or the Staging Report Suite(s) if it is in staging mode. This happens even if you enabled the "Set report suites using custom code below" option!
To get around this, you will need to include the doPlugins function (and usePlugins) in one of the tool's custom code boxes if you don't already have it (you almost certainly do, coming from a legacy implementation), and you will also need to assign the report suite(s) within it (doPlugins and usePlugins do get carried over).
For legacy H library, within doPlugins, add the following:
s.sa("[report suite id(s)]");
Note: setting the dynamicAccountXXX variables within doPlugins will not work. You will need to write your own logic for passing the report suite(s) to s.sa().
For AppMeasurement library, within doPlugins, add the following:
s.account="[report suite id(s)]";
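Putting Step 3 together for the AppMeasurement case, the wiring in the tool's custom code box might look roughly like this (the report suite id is a placeholder):

s.usePlugins = true;
function s_doPlugins(s) {
  // Re-assert the report suite on every call, since variables set in the
  // tool's custom code boxes are not carried over to the new s objects DTM
  // creates for event based and direct call rules.
  s.account = "myreportsuiteid";   // placeholder report suite id
  // ...any other plugin calls carried over from the legacy s_code...
}
s.doPlugins = s_doPlugins;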
General Notes:
In the Library Management section, setting Load library at Page Top will load the library synchronously at the position where you put your DTM Header tag, which is the earliest you can trigger it through DTM. However, this is not a guarantee the library will be loaded before your other scripts that referenced it are executed (e.g., references to the s object in some other script tag above the DTM Header script will continue to give you a reference error).
If you are indeed still using the legacy H library, then I would recommend your highest priority be to migrate to the AppMeasurement library. Even higher priority than migrating the code to DTM, IMO.
While I echo Mark's sentiments about implementing AA code as a 3rd party tag in general, the sad truth is that in practice it may still be your best option at the moment, depending on your exact scenario. DTM currently has too many caveats, shortcomings, and outright bugs that can make it impossible to implement AA as a tool in DTM, depending on your exact implementation requirements. Particularly when it comes to making AA integrate with certain common 3rd party tools, and even some of Adobe's other tools!
You will be better off if you migrate completely to DTM for analytics deployment rather than trying to reference the s object from legacy H page code.
If migrating completely from H-code to DTM is an option, I would do the following:
Remove all H page code and any references to s_code
Remove all calls to s.t or s.tl on links or pages
Deploy DTM Header / Footer code on all pages
Within DTM, Add the Adobe Analytics Tool
Within DTM, Add the Adobe Marketing Cloud ID Service
Within DTM, in the "Custom Page Code" of the Adobe Analytics tool, create the doPlugins section and add any custom plugins from the H-code.
Following these steps will allow the s object to be created within DTM and allow for all other rules to use it correctly.
What I would not do:
Deploy H-code (s_code) as a third-party script and try to reference the s object outside of the Adobe Analytics tool. This is not efficient and doesn't allow you to benefit from DTM best practices, IMO.
Mark
One of the issues I noticed when using DTM to implement Adobe Analytics was the s object being undefined, for reasons that were very much unclear. There is a workaround that I used: reminding DTM to set the s object again, in cases where DTM does not recognize what needs to be done.
var s = _satellite.getToolsByType('sc')[0].getS();
For my implementation, we used third-party JavaScript set within a direct call rule, and the above code was placed inside it. The solution worked great.
I don't even know how to describe this. I have a WPF project to which I've added some libraries, ones I've used in many other projects before. I have the strange issue that, when typing out code, IntelliSense can fill in things from a library fine, but as soon as I do a build, VS acts like all of those things are undeclared. Import statements suddenly say that I'm trying to reference things that don't exist, etc. But then if I clean the build, all of the references come back fine.
I'm completely stumped, any thoughts?
I have seen this when targeting the Client Profile while some of the DLLs require the full .NET Framework.
This can happen if you are using file based references to libraries ($ref) that have corresponding projects in the same solution as the one you are adding references to ($proj).
Visual Studio is unable to (reliably) understand the build order and builds the items out of sequence (the $proj is built before the $ref, but after the $ref's output has been cleaned).
If you have this situation, just change the references to project based references.
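For example, the change amounts to roughly this in the .csproj (paths and names are made up; older non-SDK projects may also carry Project GUID and Name metadata on the ProjectReference). In practice you would normally make the change through Solution Explorer (remove the reference, then Add Reference > Projects) rather than hand-editing:

<!-- Before: a file-based reference pointing at another project's build output -->
<ItemGroup>
  <Reference Include="MyLibrary">
    <HintPath>..\MyLibrary\bin\Debug\MyLibrary.dll</HintPath>
  </Reference>
</ItemGroup>

<!-- After: a project reference, so VS/MSBuild can order the builds correctly -->
<ItemGroup>
  <ProjectReference Include="..\MyLibrary\MyLibrary.csproj" />
</ItemGroup>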
Similarly, make sure there are no build events that would alter or move files.
Also, VS will sometimes search for a reference and pick a file at a location that you do not expect. Highlight the reference and check its property page, and make sure it's actually where you think it is.
We currently use an SVN repository to ensure everyone's local environments are kept up-to-date. However, Drupal website development is somewhat trickier in that any custom code you write (for instance, PHP code written for a node body) is stored in the DB and the changes aren't recognized by the SVN working copy.
There are a couple of developers presently working on the same area of a Drupal site, but we're uncertain about how best to merge our local Drupal database changes together. Committing patches of database dumps seems clumsy at best, and is most likely inefficient and error-prone for this purpose.
Any suggestions about how to approach this issue are appreciated!
Unfortunately, database deployment/update is one of Drupal's weak spots. See this question & answers as well as this one for some suggestions on how to deal with it.
As for CCK, you could find some hints here.
As for PHP code in content, I agree with googletorp that you should avoid doing this. However, if for some reason you absolutely have to do it, you could try to reduce the code to a simple function call. That way you'd have the function itself in a module (and that would be tracked via SVN). But then you are only a small step away from removing the need for the inline code anyway ...
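As a rough sketch (the module and function names are made up), the inline snippet moves into a module file that lives in SVN:

<?php
// mymodule.module (hypothetical): the logic that used to sit in the node body.
function mymodule_latest_headlines($limit = 5) {
  // ...whatever the inline PHP used to do; it is now versioned with the module.
  return t('Showing the latest @count headlines.', array('@count' => $limit));
}

The node body is then reduced to <?php print mymodule_latest_headlines(); ?>, and from there it is a small step to replace even that with a proper block or theme function.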
If you are putting PHP code into your database, then you are doing it wrong. Some things do live in the database, like views and CCK fields, plus some settings. But if you put PHP code inside the node body, you are creating a big code maintenance problem. You should really use the API and hooks instead. Create modules instead of ugly hacks with eval etc.
All that has been said above is true and good advice. To answer your practical question, there are a number of recent modules that you could use to transport the changes made by the various developers.
The "Features" modules is a cure the the described issue of Drupal often providing nice features, albeit storing lots of configs and structure in the DB. This module enables you to capture a feature and output it as a pseudo-module (qualifies as a module with .info and code-files and all). Here is how it works:
Select the functionality/feature to export
The module analyses the modules, files, and DB content that are required to rebuild that feature elsewhere
The module creates a pseudo-module that contains the instructions from the previous step and outputs everything (even SQL to rebuild the stuff in the DB) into a module package (as well as setting dependencies on other required modules)
Install the pseudo-module on your new site and enable it
The pseudo-module replicates the feature you exported, rebuilding DB data and all
And you can tell your boss you did it all manually with razor focus to avoid even 1 error ;)
I hope this helps - http://drupal.org/project/features
By committing patches of database dumps, do you mean taking an entire extract of the db and committing it after each change?
How about a master copy of the database? Extract all tables, views, stored procedures, etc. into individual files, put them into SVN, and do your merge edits on the individual objects?