How to get information about libraries in qooxdoo?

I want to get info about the libraries listed in compile.json (the libraries section) at runtime. How can I do that?
The sort of information I would like to get is meta information about the themes used in the libraries.

There are two things here: firstly, meta data (which you would have to serve up to your client manually) and secondly, runtime information which is compiled in.
Meta Data
This data is used by the API viewer https://github.com/qooxdoo/qxl.apiviewer
There are two sets of meta data. Firstly, when you compile with qx compile, the compiler will output a db.json in the root of the target's output (e.g. ./compiled/source/db.json) - this contains information for each class, for every application, including dependency information.
The snag here is that if you add and remove dependencies, there could be classes listed in db.json which are no longer used, and if you have several applications you will need to walk the dependency tree to find out which classes are used by which application.
The second set of data is that every class is transpiled into the compiled/source/transpiled/ directory and has a .js, a .map, and a .json file - so for qx.core.Object you will have ./compiled/source/transpiled/qx/core/Object.{js,map,json}
The class's .json file contains loads of information about the superclass, interfaces, mixins, members, and properties - and for each one, records the parsed JSDoc as well as where it occurs in the original source file.
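If you need that meta data in the browser you have to fetch it yourself. A minimal sketch, assuming the per-class .json files are served alongside the application (the relative path is an assumption based on the default source target layout):

```javascript
// Minimal sketch: load the compiler's per-class meta file for qx.core.Object.
// The path assumes the default "source" target layout; adjust it to wherever
// compiled/source/transpiled/ is actually served from.
const response = await fetch("../transpiled/qx/core/Object.json");
const meta = await response.json();
// Contains superclass, interfaces, mixins, members, properties and parsed JSDoc.
console.log(meta);
```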
Runtime Information
There is a global variable called qx.libraryInfoMap that lists information about all of the libraries which are compiled in - its contents come from each library's Manifest.json.
There is also qx.$$libraries, a global variable which gives the URIs for each library.
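A minimal sketch of reading both at runtime (these are internal structures, so treat the exact field names as assumptions that mirror each Manifest.json):

```javascript
// Enumerate the compiled-in libraries and their Manifest.json data.
Object.entries(qx.libraryInfoMap || {}).forEach(([namespace, manifest]) => {
  console.log(namespace, manifest); // e.g. manifest.info, manifest.provides
});
// qx.$$libraries maps each library namespace to its URIs.
Object.entries(qx.$$libraries || {}).forEach(([namespace, uris]) => {
  console.log(namespace, uris);
});
```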
Safety
None of the data above is documented or guaranteed to be immune from changes in structure - technically these are internal data structures - but they are reasonably unlikely to change, and using them is supported.
Because of the impact on other people's code, as well as on our internal tools such as the API viewer, we would try not to change the structure, but this is not guaranteed, and a minor version bump could (potentially) change it.

Related

Is it possible to manipulate pdf files in Visual Basic without an external library/SDK?

I am looking at how to implement PDF merging with raw VB code so that the code may be invoked by a bot for business process automation.
The software used to create the bot provides a function to invoke VB code, but I don't believe it can access any externally imported libraries because it expects plain source, so I essentially need to produce code that one could run in a VB shell environment without anything fancy (or convenient, it seems).
All the research I've done so far points me in the direction of external packages I would need to install, such as iText; this is what I'm looking to avoid.
(former iText employee here)
PDF is not an easy (binary) format.
Essentially, blobs of information (text that has to be rendered, fonts, images, vector graphics, etc) are compressed and gathered into objects.
Each object gets a number. Objects are allowed to reference each other (a piece of text might say 'I want to be rendered with font 4433').
All object numbers and their byte offsets in the file are gathered in the cross-reference (often called XREF) table.
A PDF includes a 'Pages' dictionary object that tells the viewer which objects belong on which page.
In order to merge PDF files, you would need to:
- read all XREF tables of all files
- adjust all of those to the correct byte offset
- update various dictionary objects within the PDF file that tell it where all the objects per page are kept
This is by no means a trivial task, but it can be done using only VB.
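As a very small illustration of the first step, here is a sketch in plain VB.NET with no external libraries (the file name is a placeholder): it locates the XREF table by reading the startxref entry that every PDF carries near the end of the file.

```vb
Imports System.IO
Imports System.Text

Module PdfXref
    Sub Main()
        ' "input.pdf" is a placeholder file name.
        Dim bytes As Byte() = File.ReadAllBytes("input.pdf")
        ' The trailer lives near the end of the file, so read only the tail.
        Dim tailLen As Integer = Math.Min(1024, bytes.Length)
        Dim tail As String = Encoding.ASCII.GetString(bytes, bytes.Length - tailLen, tailLen)
        Dim idx As Integer = tail.LastIndexOf("startxref")
        If idx < 0 Then
            Console.WriteLine("No startxref marker found - not a well-formed PDF.")
            Return
        End If
        ' The line after "startxref" holds the byte offset of the XREF table.
        Dim rest As String = tail.Substring(idx + "startxref".Length).Trim()
        Dim offset As Long = Long.Parse(rest.Split(ControlChars.Lf)(0).Trim())
        Console.WriteLine("Cross-reference table starts at byte offset " & offset)
    End Sub
End Module
```

Walking the XREF table at that offset, renumbering objects, and rewriting the Pages dictionary is where the real work of a merge begins.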
If you are serious about implementing a robust, scalable version of this tool, perhaps it's better to look at the iText source code and try to port it to VB?

Derived objects in Clearcase

I want to ask what exactly a derived object is in ClearCase and how it works.
Additionally, I want to ask whether there is another program with the same function, because I cannot find anything similar in Git, MKS, or IBM® Rational Team Concert™. Is it obsolete?
This is quite linked to dynamic views, which are very specific to ClearCase and not found in other more recent VCS.
See "ClearCase Build Concepts"
Developers perform builds, along with all other work related to ClearCase, in views. Typically, developers work in separate, private views. Sometimes, a team shares a single view (for example, during a software integration period).
As described in Developing Software, each view provides a complete environment for building software that includes a particular configuration of source versions and a private work area in which you can modify source files, and use build tools to create object modules, executables, and so on.
As a build environment, each view is partially isolated from other views. Building software in one view never disturbs the work in another view, even another build of the same program at the same time. However, when working in a dynamic view, you can examine and benefit from work done previously in another dynamic view. A new build shares files created by previous builds, when appropriate. This sharing saves the time and disk space involved in building new objects that duplicate existing ones.
You can (but need not) determine what other builds have taken place in a directory, across all dynamic views. ClearCase includes tools for listing and comparing past builds.
The key to this scheme is that the project team's VOBs constitute a globally accessible repository for files created by builds, in the same way that they provide a repository for the source files that go into builds.
A file produced by a software build is a derived object (DO). Associated with each derived object is a configuration record (CR), which clearmake or omake uses during subsequent builds to determine whether the DO can be reused or shared.
A derived object (DO) is a file created in a VOB during a build or build audit with clearmake or omake.
Each DO has an associated configuration record (CR), which is the bill of materials for the DO. The CR documents aspects of the build environment, the assembly procedure for a DO, and all the files involved in the creation of the DO.
The build tool attempts to avoid rebuilding derived objects.
If an appropriate derived object exists in the view, clearmake or omake reuses that DO.
If there is no appropriate DO in the view, clearmake or omake looks for an existing DO built in another view that can be winked in to the current view.
The search process is called shopping.
This is relevant for very large C or C++ makefile-based projects.
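For reference, ClearCase's own tooling exposes all of this from inside a view; the file names below are placeholders:

```
cleartool lsdo hello.o          # list derived objects created for hello.o
cleartool catcr hello.o         # print a DO's configuration record (its bill of materials)
cleartool diffcr old.o new.o    # compare the configuration records of two DOs
```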
I think the TL;DR version of this is:
Derived objects contain information that describes:
- What was accessed to build the object, including dependencies that may not be in your build files.
- Other files created during the build process ("Sibling Derived Objects").
- The commands used to build the object (the "build script"), assuming that clearmake, omake, or the ANT listener were used to run the build.
In the case of clearmake and omake, this information is used to avoid rebuilds, potentially speeding up builds. The lookup is referred to as DO "shopping" and the build avoidance as "winkin."
If you have regulatory or security compliance needs where this level of auditing is critical, there really isn't anything else that does this.

How to keep multiple versions of an artifact distinguished by properties in Artifactory?

Is it possible to somehow keep different versions of the same file in Artifactory, where the versions are differentiated by their properties?
For example, let's have a file foo.
I upload the file to Artifactory via the REST API and set ver=1 property.
The file changes, I upload it again, this time it has ver=2 property.
Later I try to access the ver=1 file, but get a 404 error.
I understand that Artifactory keeps different versions of Artifacts which are associated with different builds. However there is no build info other than the "custom property" for the files I am uploading. How can I version them?
You must make sure that each artifact is also deployed with a unique path/file name. It is not enough to have a different set of properties.
Usually the best way to version the file will be having the version number as part of the file name and possibly also as part of the path.
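A minimal sketch of that pattern against Artifactory's REST deploy API, using Python requests (host, repository, and credentials are placeholders; the ;ver=N matrix parameter sets the custom property at deploy time):

```python
import requests

BASE = "https://artifactory.example.com/artifactory"  # placeholder host
AUTH = ("user", "password")                           # placeholder credentials

def deploy(version: str, path: str = "foo") -> None:
    """Upload `path` under a version-specific target path, tagged ver=N."""
    with open(path, "rb") as f:
        # The version in the path and file name keeps uploads from
        # overwriting each other; ";ver=..." also attaches the property.
        url = f"{BASE}/my-repo/foo/{version}/foo-{version};ver={version}"
        requests.put(url, data=f, auth=AUTH).raise_for_status()

deploy("1")
deploy("2")  # both versions now exist side by side under distinct paths
```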

Cross-contamination between AngularJS tests

We have a configuration file that we use in our AngularJS app. Because we need our configuration information during the build phase, we define it as a value. The configuration file contains information about where to find the assets for one of several Assessments, so we have a configurationService with an updateAssessment() function that looks in the configuration file at the various Assessments that are defined and then copies the properties from the specific Assessment into another value, assessmentSettings.
We have some situations where we want to read in some additional settings from an XML file, but when those settings are already provided as part of the configuration for that Assessment, we want to ignore them. I have a test that checks that this is done, and it runs and passes.
However, massaging the project configuration value, then calling configurationService.updateAssessment(1) in that test causes 41 other tests in different files to fail. My understanding is that Angular should be torn down and brought back up for each test and should certainly not cross-contaminate across different files. Is there something different about values that would cause this to happen?
Note that the project itself seems to load and run fine. I haven't provided code examples because it would be a fair amount of code and I don't think it would be that enlightening. Angular 1.3.
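For context, a minimal sketch of the setup described (the names come from the question; the wiring is assumed). One detail worth noting: .value() registers a single plain object, and ngMock hands that same object instance to every injector it creates, so any in-place mutation survives from one test to the next:

```javascript
angular.module("app", [])
  .value("projectConfiguration", { assessments: { 1: { theme: "dark" } } })
  .value("assessmentSettings", {})
  .factory("configurationService", function (projectConfiguration, assessmentSettings) {
    return {
      updateAssessment: function (id) {
        // Copies the Assessment's properties into the shared settings
        // object in place - the mutation outlives this test's injector.
        angular.extend(assessmentSettings, projectConfiguration.assessments[id]);
      }
    };
  });
```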

How to include XSD schema files in a Silverlight library?

Within a Silverlight library, I need to validate incoming XML against a schema. The schema is composed of 5 interdependent .xsd files; the main file uses "xs:import" to reference all of the others, and there are other references among them.
Assuming that the .xsd files need to be distributed with the library (i.e. not hosted on some well-known external URL), how should I structure my project to include them?
I have found that I can embed them in the library project with build type "Resource" and then load them (individually) using Application.GetResourceStream() and a relative URI with the ";component" flag in it. But if I take this approach, can I validate against the interdependent set of 5 files? What happens when the schema parser tries to resolve the interdependencies?
Or should I embed them with build type "Content" and access the main one with some other sort of URL?
Or???
To summarize: how should I use these 5 .xsd files in my project so that I will be able to validate XML against them?
EDIT: It's not clear whether it's even possible to validate in Silverlight. I spun off a related question.
I cannot say much about Silverlight limitations with respect to validation, but the question itself is more generic - one might want to store .xsd files as resources in a desktop .NET application, for example - so I will answer that part.
You can have full control over resolution of URIs in xs:import by means of the XmlSchemaSet.XmlResolver property. Just create your own subclass of XmlResolver, override the GetEntity() method, and implement it using GetResourceStream(), or GetManifestResourceStream(), or whichever other way you prefer.
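A minimal sketch for the desktop .NET case (the resource names under "MyLibrary.Schemas" are placeholders; adjust them to however the five .xsd files are embedded):

```csharp
using System;
using System.IO;
using System.Net;
using System.Reflection;
using System.Xml;
using System.Xml.Schema;

// Resolves xs:import/xs:include locations from embedded resources instead
// of fetching them from disk or the network.
class EmbeddedXsdResolver : XmlResolver
{
    public override ICredentials Credentials
    {
        set { } // credentials are irrelevant for embedded-resource lookups
    }

    public override object GetEntity(Uri absoluteUri, string role, Type ofObjectToReturn)
    {
        // Map the file-name part of the URI (e.g. "common.xsd") onto an
        // embedded resource name; "MyLibrary.Schemas" is a placeholder.
        string name = "MyLibrary.Schemas." + Path.GetFileName(absoluteUri.AbsolutePath);
        return Assembly.GetExecutingAssembly().GetManifestResourceStream(name);
    }
}

class Demo
{
    static void Main()
    {
        var schemas = new XmlSchemaSet { XmlResolver = new EmbeddedXsdResolver() };
        using (var main = Assembly.GetExecutingAssembly()
                   .GetManifestResourceStream("MyLibrary.Schemas.main.xsd"))
        {
            // Imports among the five files are resolved through the resolver.
            schemas.Add(null, XmlReader.Create(main));
        }
        schemas.Compile();
        Console.WriteLine("Compiled " + schemas.Count + " schemas.");
    }
}
```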
