I am in the process of migrating an existing Adobe Analytics implementation on s_code version 27.5 to DTM. The first step of the migration, and what is in scope for this project, is a lift-and-shift of the current s_code into Adobe DTM.
The site has multiple JS files housing functions that need the s object to be initialised before they will work. However, s is initialised within the s_code contents after most of these functions have already run, so they throw 's is not defined' errors. It is not being initialised globally as it would be in a standard implementation.
Is there a way I can initialise s globally in the DTM satellite library? I have tried adding var s = {}; in a page load rule under the third party / custom tags area, but I am only having intermittent luck with it; sometimes errors are still thrown.
Any support/insight into this issue would be most appreciated.
Thanks!
Step 1: Change the Code Configuration to Custom
Note: If you migrated your legacy H code to DTM as a tool, then you should already be familiar with this step and have already done it, since DTM does not allow you to specify legacy H code for the "Managed by Adobe" option.
In the Library Management section of the Adobe Analytics tool, change the Code Configuration option to Custom, and Code Hosted to In DTM.
If you are using the legacy H code library, then you must also check the "Set report suites using custom code below" option. If part of your migration to DTM was to move to AppMeasurement library, checking this option is optional, depending on how you want to handle report suite routing.
Then, click the Open Editor button to open the editor. You should see the Adobe Analytics library in the code box. If you are migrating legacy H code, then remove everything currently in the box and add your legacy H code library (which you should have already done based on the question).
Step 2: Instantiate the s object
If you are using the legacy H code, then add the following line to the top of the code box, above the library:
window.s = s_gi("[report suite id(s)]");
You will need to replace [report suite id(s)] with the report suite id(s) you want to send the data to. s_gi() requires a value to be passed to it, which is why you must check the checkbox above.
If you are using AppMeasurement library, then add the following line to the top of the code box, above the library:
window.s = new AppMeasurement("[report suite id(s)]");
If you checked the "Set report suites using custom code below" checkbox, then specify the report suite(s); if you did not check it, then do not pass anything to AppMeasurement(). Alternatively, you can pass nothing but also add the following underneath it:
s.account="[report suite id(s)]";
Note, however, that in Step 3 you will be setting it in doPlugins anyway, so you don't really need it here (this sidenote is just for other readers who may be migrating an AppMeasurement s_code.js to DTM).
Note: Aside from the window.s part, you should already be familiar with this line of code, and, coming from a legacy implementation, already have logic for populating the report suite(s). Specifically, you may be using the dynamicAccountXXX variables. If you are upgrading to the AppMeasurement library, then you will need to write your own logic to simulate that, since AppMeasurement (for reasons unclear to anybody) does not have this functionality.
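To illustrate that last point, here is a minimal, hypothetical sketch of such logic for AppMeasurement; the hostnames and report suite IDs below are placeholders, and the assignment assumes the s object from this step already exists:

// Hypothetical sketch of simulating the legacy dynamicAccountList behaviour
// with AppMeasurement; hostnames and report suite ids are placeholders.
function getReportSuites() {
  var host = window.location.hostname;
  if (/(^|\.)dev\.example\.com$/.test(host)) {
    return "devsuite";              // staging / development traffic
  }
  return "prodsuite1,prodsuite2";   // everything else
}
s.account = getReportSuites();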
Step 3: Setting report suite(s) after page load
One of the many caveats of implementing Adobe Analytics as a tool is that DTM (for reasons unclear to anybody) creates a new s object whenever an event based or direct call rule fires with AA set to trigger. In practice, this means almost all of the variables you set within the custom code boxes in the tool config will not be carried over to subsequent AA calls on a page; report suite(s) being one of them.
What DTM does for the report suite is set it to the specified Production Report Suite(s) if DTM is in production mode, or the Staging Report Suite(s) if in staging mode. Even if you enabled the "Set report suites using custom code below" option!
To get around this, you will need to include a doPlugins function (and usePlugins) in one of the tool's custom code boxes if you don't already have one (you almost certainly do, coming from a legacy implementation), and you will also need to assign the report suite(s) within it (doPlugins and usePlugins do get carried over).
For legacy H library, within doPlugins, add the following:
s.sa("[report suite id(s)]");
Note: setting dynamicAccountXXX variables within doPlugins will not work. You will need to write your own logic for passing the report suite(s) to s.sa().
For AppMeasurement library, within doPlugins, add the following:
s.account="[report suite id(s)]";
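To put the pieces together, here is a minimal sketch (AppMeasurement flavour) of how the doPlugins block in the tool's custom code box might look. The report suite ID is a placeholder, and legacy H code users would call s.sa() instead of assigning s.account:

s.usePlugins = true;
s.doPlugins = function (s) {
  // Re-assign the report suite(s) so that event based and direct call rules
  // fired later on the page still send data to the right suite(s).
  s.account = "prodsuite1,prodsuite2";   // placeholder; use s.sa("...") for legacy H code

  // ...any existing plugin calls from your legacy s_code go here...
};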
General Notes:
In the Library Management section, setting Load library at Page Top will load the library synchronously at the position where you put your DTM Header tag, which is the earliest you can trigger it through DTM. However, this does not guarantee that the library will be loaded before other scripts that reference it are executed (e.g., references to the s object in a script tag above the DTM Header script will still give you a reference error).
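As a hedge, any non-DTM script that might execute before the library loads can guard its references to s. A rough sketch only; the variables and link name are placeholders:

// Defensive check in a script that may run before the Adobe library has loaded.
if (window.s && typeof window.s.tl === "function") {
  s.linkTrackVars = "eVar1";
  s.eVar1 = "some value";
  s.tl(true, "o", "Example link");   // placeholder link name
} else {
  // s is not available yet; skip the call or defer it until the library has loaded
}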
If you are indeed still using the legacy H library, then I would recommend your highest priority be to migrate to the AppMeasurement library. Even higher priority than migrating the code to DTM, IMO.
While I echo Mark's sentiments about implementing AA code as a 3rd party tag in general, the sad truth is that, in practice, it may still be your best option at the moment, depending on your exact scenario. DTM currently has too many caveats, shortcomings, and outright bugs that can make it impossible to implement AA as a tool in DTM, depending on your exact implementation requirements. Particularly when it comes to making AA integrate with certain common 3rd party tools, and even some of Adobe's other tools!
You will be better off if you migrate completely to DTM for analytics deployment rather than trying to reference the s object from legacy H page code.
If migrating completely from H-code to DTM is an option, I would do the following:
Remove all H page code and any references to s_code
Remove all calls to s.t or s.tl on links or pages
Deploy DTM Header / Footer code on all pages
Within DTM, Add the Adobe Analytics Tool
Within DTM, Add the Adobe Marketing Cloud ID Service
Within DTM, in the "Custom Page Code" of the Adobe Analytics tool, create the "do_plugins" section and add any custom plugins from the H-code.
Following these steps will allow the s object to be created within DTM and allow for all other rules to use it correctly.
What I would not do:
Deploy H-code (s_code) as a third-party script and try and reference the s object outside of the Adobe Analytics tool. This is not efficient and doesn't allow you to get the best practices from DTM, IMO.
Mark
One of the issues noticed using DTM to implement Adobe Analytics was the s object being undefined.
The reasons are very much unclear. There is a workaround that I used: remind DTM to set the s object again in cases where DTM does not recognize what needs to be done.
var s = _satellite.getToolsByType('sc')[0].getS();
For my implementation, we used a third-party JavaScript block set within a direct call rule, and the above code was placed inside it.
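For reference, a rough sketch of what that third-party JavaScript inside the direct call rule might look like; the tracking variables and link name are placeholders, and it assumes the Analytics tool is the first 'sc' tool configured in DTM:

// Retrieve the s object from the DTM Analytics tool, then fire a custom link call.
var s = _satellite.getToolsByType('sc')[0].getS();
if (s) {
  s.linkTrackVars = "events";        // placeholder variables
  s.linkTrackEvents = "event1";
  s.events = "event1";
  s.tl(true, 'o', 'Direct call example');
}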
The solution worked great ....
We're designing a system for a client that allows authenticated users to upload images. We've created an API to upload the files, but the client wants only the latest file kept and all previous ones deleted, so that there is only ever one.
We've looked through the docs and haven't come across a way for ADAM to handle this in either 2SXC or DNN's file system.
Internally when deleting images we see API calls like the following to the internal 2SXC API, but we're wondering if this is exposed somewhere within the public API?
https://somedomain.com/api/2sxc/app/auto/data/61393528-b401-411f-a001-f423ea46700a/b7d04e2c-c565-496c-8efb-aa133cf90d33/Photo/delete?subfolder=&isFolder=false&id=189&usePortalRoot=false&appId=3
We could probably use the same endpoint above, but we'd likely run into permission issues or changes to the APIs that could be problematic.
Thank you for any advice you can give! Perhaps #iJungleBoy can provide some thoughts on this.
As a solution from a completely different direction: if you are on a later release of 2sxc (v12.8+, v13+) and comfortable programming in C#, you might consider doing this as a "cleanup" from a DNN Scheduled Task. This can be done with a relatively easy setup. We have a Gist in place that we use as a starter. You simply put the code in the /App_Code folder and then set up a normal DNN Scheduled Task. Note that you can scroll down to the first comment on the Gist to see a screenshot of a complete working setup.
Accuraty's AccuTasks template on GitHub Gists
There are two more key things to note:
You need to install DNN's CodeDom 3.6 because the example uses a later C# version's string interpolation, OR remove the few $"ASL2021 - {this.GetType().Name}, Task Scheduled Email" bits, or convert them to string.Format() or something similar.
Since your task's code is NOT running in a (2sxc) module, if needed, you'll do stuff like this: 2sxc Docs - Use 2sxc Instance or App Data from External C# Code
So, if you are comfortable writing code that "finds and deletes stuff older than NN days" - this might be the way to go.
I would like to (programmatically) convert a text file with questions into a Google Form. I want to specify the questions, the question types and their options. Example: the question type scale should go from 1 to 7 and should have the label 'not important' for 1 and 'very important' for 7.
I was looking into the Google Spreadsheet API but did not see a solution.
(The Google form API at http://code.lancepollard.com/introducing-the-google-form-api is not an answer to this question)
Google released an API for this: https://developers.google.com/apps-script/reference/forms/
This service allows scripts to create, access, and modify Google Forms.
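For example, a short Apps Script sketch that creates the scale question described in the question above; the form title and question text are placeholders:

// Runs in script.google.com using the Forms service (FormApp).
function createSurvey() {
  var form = FormApp.create('Imported questions');     // placeholder title
  form.addScaleItem()
      .setTitle('How important is this to you?')       // placeholder question text
      .setBounds(1, 7)
      .setLabels('not important', 'very important');
}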
Until Google satisfies this feature request (star the feature on Google's site if you want to vote for it), you could try a non-API approach.
iMacros allows you to record, modify and play back macros that control your web browser. My experiments with Google Drive showed that the basic version (without DirectScreen technology) doesn't record macros properly. I tried it with both the plugin for IE (basic and advanced click mode) and Chrome (the latter has limited iMacro support). FYI, I was able to get iMacros IE plug-in to create questions on mentimeter.com, but the macro recorder gets some input fields wrong (which requires hacking of the macro, double-checking the ATTR= of the TAG commands with the 'Inspect element' feature of Chrome, for example).
Assuming that you can get the TAG commands to produce clicks in the right places in Google Drive, the approach is that you basically write (ideally record) a macro, going through the steps you need to create the form as you would using a browser. Then the macro can be edited (you can use variables in iMacros, get the question/questiontype data from a CSV or user-input dialogs, etc.). Looping in iMacros is crude, however. There's no EOF for a CSV (you basically have to know how many lines are in the file and hard-code the loop in your macro).
There's a way to integrate iMacro calls with VB, etc., but I'm not sure if it's possible with the free versions. There's another angle where you generate code (Javascript) from a macro, and then modify it from there.
Of course, all of these things are more fragile than an API approach long-term. Google could change its presentation layer and it will break your macros.
It seems Apps Script now has a REST API and SDKs for it. Through Apps Script you can generate Google Forms. This API was really hard to find by googling for it, and I haven't yet tested it myself, but I am going to build something with it today (hopefully). So far everything looks good.
EDIT: Seems like the REST API I am using works very well for fully automated usage.
In March 2022, Google released a REST API for Google Forms. The API allows basic CRUD operations and also adds support for registering watches on a form to be notified whenever the form is updated or a new response is received.
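As a rough, untested sketch (check the current API reference for exact field names), this is the kind of request body you would POST to https://forms.googleapis.com/v1/forms/{formId}:batchUpdate with an OAuth token to add the 1-7 scale question from the original question:

// Hypothetical batchUpdate body; the question text is a placeholder.
var body = {
  requests: [{
    createItem: {
      item: {
        title: "How important is this to you?",
        questionItem: {
          question: {
            scaleQuestion: {
              low: 1,
              high: 7,
              lowLabel: "not important",
              highLabel: "very important"
            }
          }
        }
      },
      location: { index: 0 }
    }
  }]
};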
As of now (March 2016), the Google Forms APIs allow us to create forms and store them in Google Drive. However, the Forms APIs do not allow one to programmatically modify a form (such as modifying content, adding or deleting questions, pre-filling data, etc.). In other words, the form is static. To serve custom needs, external APIs are needed.
I work on Labware LIMS, which has both configuration, and customization via its own programming language and internal code editor, and stores this customization code in database records. (Note, not the source code of the actual application itself, just the customization code a.k.a. LIMS Basic.) Almost everything in LIMS is stored in the database.
We want to investigate the possibility of using source control to protect this code but we don't know much more than the theory of using something like Git. (I have worked as a junior QA and used git but not as a dev and my knowledge is limited!)
Of particular use would be the merging tools, as currently we have to manually merge code in a text editor, if we even notice there is a conflict (checking content between dev and live is time consuming and involves using multiple tools, some of which are 3rd party tools we have developed ourselves, which are hit and miss). I personally find it easiest to cut and paste into a text file and then use Beyond Compare.
There is no notification that the code is different when moving it from dev to live (no deployment as such, you just import an xml file) so we often have things going live that someone was working on unbeknownst to each other. I.e. dev 1 is working on the code in object 1, dev 2 gets a ticket to make a change to object 1, does so and puts their change Live, whatever dev 1 was doing is now also Live in whatever state it was in. (Because we don't always have time to thoroughly check what state each object is in between up to 3 different databases.)
Is it possible to use source control just on the code within the database, but not necessarily the database itself? (We have backups and such for that but its easy for some aspects of the system to get overwritten by multiple devs working on overlapping areas at the same time.)
If anyone reading this has any specific knowledge of LW LIMS: we are referring to the Subroutines mostly. We have versioned Analyses, which stands in for source control for the moment and is somewhat effective, but there is no way to control who is doing what on the subroutines other than a comment log at the top. I have tried to find any information on how other teams source control their code in LIMS, but to no avail.
The structure of one of these tables can range from as simple as the code just existing in one field as a straight text dump with a few other fields such as changed_on, changed_by and name (Subroutines), or more complex with code relating to one record being sprinkled around in multiple rows on another table entirely (Analyses) but even if it could just deal with the simple scenario to start with that would be great!
TL;DR: Could the contents of the Code field in a database record be treated like a regular code object in other dev environments somehow and source controlled using Git? (And is anyone willing to explain it simply for me to follow?)
You need to version control the Subroutine table fields, but LW LIMS doesn't have IDE integration for version control (such as Git, SVN, etc.), so the direct answer is no.
If you really want to version control the code in the database, you can create a Git repository and put only the code in it. When a file is updated, you can commit and push the changes, and it's easy to compare the differences between versions.
For more detail about Git, you can refer to the Git book.
LabWare LIMS has a number of options for version control. You COULD version the Subroutine table by adding a SUBROUTINE.VERSION field to the table, this works the same way as other versioned tables in LabWare where it asks you if you would like to create a new version of the object before saving. There are a few customers I work with that have done this.
Alternatively, (and possibly our more recommended method prior to LEM) there is the Snapshot capability where the system automatically takes a "snapshot" of objects as they are saved - when viewing these you have the ability to view them side by side in a comparison dialogue - it will show < or > for lines which are different.
Another approach: if you have auditing turned on, you are able to view the audit history for changes to specific objects, and this includes subroutines.
One other approach is to use configuration packages; these have the ability to record version AND build numbers, though individual subroutines are probably a bit too granular for their intended design.
Lastly, since this question was originally posted we have developed a product called LabWare Environment Manager (LEM) which has some good change control functionality built-in.
For more information on the suggestions above, please have a look at the LabWare Technical manual for the version you are on. We also have a mailing list for questions like this to be posted. You might find an answer there. If you have access to our Support webpage you're able to search previous questions that have been asked. I'd also suggest that you get in touch with your Account Manager at LabWare who can help you answer some of your questions.
HTH
Can anyone give a high level description of what is going on in the Composite C1 core? In particular I am interested in knowing how the plugin architecture works and what the core components of the system are, i.e. when a request arrives, what happens in the architecture. The description doesn't have to be too verbose, just a list of steps and the classes involved.
Hopefully one of the core development team would enlighten me... and maybe publish some more API (hint hint more class documentation please).
From request to rendered page
The concrete path a request takes depends on the version of C1 you're using, since it was changed to use Routing in version 2.1.2. So let's see:
< 2.1.2
Composite.Core.WebClient.Renderings.RequestInterceptorHttpModule will intercept all incoming requests and figure out if the requested path corresponds to a valid C1 page. If it does, the URL will be rewritten to the C1 page handler ~/Renderers/Page.aspx
>= 2.1.2
Composite.Core.Routing.Routes.Register() adds a C1 page route (Composite.Core.Routing.Pages.C1PageRoute) to the Routes collection that looks at the incoming path and figures out if it's a valid C1 page. If it is, it returns an instance of ~/Renderers/Page.aspx ready to be executed.
Okay, so now we have an instance of an IHttpHandler ready to make up the page to be returned to the client. The actual code for the IHttpHandler is easy to see, since it's located in ~/Renderers/Page.aspx.cs.
OnPreInit
Here we're figuring out which page ID and which language were requested, whether we're in preview mode or not, which data scope, etc.
OnInit
Now we're fetching the content from each Content Placeholder of our page and executing the functions it may contain. It's done by calling Composite.Core.WebClient.Renderings.Page.PageRenderer.Render, passing the current page and our placeholders. Internally it will call the method ExecuteFunctions, which will run through the content and recursively resolve C1 function elements (<f:function />), execute them and replace each element with the function's output. This is repeated until there are no more function elements in the content, in case functions themselves output other functions.
Now the whole content is wrapped in an ASP.NET WebForms control and inserted into our WebForms page. Since C1 functions can return WebForms controls like UserControl etc., this is necessary for them to work correctly and trigger the WebForms event lifecycle.
And that's basically it. Rendering of a requested page is very simple and very extensible. For instance, there is an extension that enables the usage of MasterPages, which simply hooks into this rendering flow very elegantly. And because we're using Routing to map which handler to use, it's also possible to forget about ~/Renderers/Page.aspx and just return an MvcHandler if you're an MVC fanatic.
API
Now, when it comes to the more core APIs, there are many, depending on what you want to do. But you can be pretty sure that, no matter what, the necessary ones are there to get the job done.
At the deep end we have the Data Layer, which most other APIs and facades are centered around. This means you can do most things by working with the raw data, instead of going through facades all the time. This is possible since most configuration of C1 is done by using its own data layer to store configuration.
The Composite C1 core group has yet to validate/refactor and document all the APIs in the system and hence operates with the concept of 'a public API' and of what can become an API when the demand is there. The latter is a pretty darn stable API, but without guarantees.
The public API documentation is online at http://api.composite.net/
Functions
Functions are a fundamental part of C1 and are a technique to abstract logic from execution. Basically everything that either performs an action or returns some data/string/values is a candidate for a function. At the lowest level a function is a .NET class implementing the IFunction interface, but luckily there are many easier ways to work with them. Out of the box C1 supports functions defined as XSLT templates, C# methods or SQL. There is also community support for writing functions using Razor or having ASP.NET UserControls (.ascx files) act as functions.
Since all functions are registered in C1 during system startup, we use Composite.Functions.FunctionFacade to execute whatever function we know the name of. Use GetFunction to get a reference to a function, and then Execute to execute it and get a return value. Functions can take parameters, which are passed as real .NET objects when executing a function. There is also full support for calling functions with XML markup using the <f:function /> element, meaning that editors, designers, template makers etc. can easily access a wealth of functionality without having to know how to write .NET code.
Read more about functions here http://users.composite.net/C1/Functions.aspx and about how to use e.g. Razor to make functions here http://docs.composite.net/C1/ASP-NET/Razor-Functions.aspx
Globalization and Localization
C1 has full multi-language support in the core. Composite.Core.Localization.LocalizationFacade is used for managing the installed locales in the system: querying, adding and removing them. Locales can be any CultureInfo object known to your system.
Composite.Core.ResourceSystem.StringResourceSystemFacade is used for getting strings at runtime that matches the CultureInfo your request is running in. Use this, instead of hardcoding strings on your pages or in your templates.
Read more about Localization here http://docs.composite.net/C1/HTML/C1-Localization.aspx
Global events
Composite.C1Console.Events.GlobalEventSystemFacade is important to know if you need to keep track of when the system is shutting down, so you can make last-minute changes. Since C1 is highly multithreaded, it's easy to write extensions and modules for C1 that are multithreaded as well, taking advantage of multi-core systems and parallelization, and therefore it's also crucial to shut down one's threads in a proper manner. The GlobalEventSystemFacade helps you do that.
Startup events
If you write plug-ins, these can have a custom factory. Other code can use the ApplicationStartupAttribute attribute to get called by the Composite C1 core when the web app starts up.
Data events
You can subscribe to data add, edit and delete events (pre and post) using the static methods on Composite.Data.DataEvents<T>. To attach to these events when the system starts up, use the ApplicationStartupAttribute attribute.
Data
Composite.Core.Threading.ThreadDataManager is important if you're accessing the Data Layer outside of a corresponding C1 page request. This could be a custom handler that just has to serve the newest news as an RSS feed, or maybe you're writing a console application. In these cases, always remember to wrap the code that accesses the data like this:
using(Composite.Core.Threading.ThreadDataManager.EnsureInitialize())
{
// Code that works with C1 data layer goes here
}
For accessing and manipulating data it's recommended NOT to use the DataFacade class, but instead to wrap all code that gets, updates, deletes or adds data like this:
using(var data = new DataConnection())
{
// Do things with data
}
IO
When working with files and directories it's important to use the C1 equivalent classes Composite.Core.IO.C1File and Composite.Core.IO.C1Directory instead of .NET's File and Directory. This is due to the fact that C1 can be hosted on Azure, where you might not have access to the filesystem in the same way as you have on a normal Windows Server. By using C1's File and Directory wrappers you can be sure that code you write will be able to run on Azure as well.
C1 Console
The console is a whole subject in itself and has many, many APIs.
You can create your own trees using Composite.C1Console.Trees.TreeFacade or Composite.C1Console.Elements.ElementFacade and implementing a Composite.C1Console.Elements.Plugins.ElementProvider.IElementProvider.
You can use the Composite.C1Console.Events.ConsoleMessageQueueFacade to send messages from the server to the client to make it do things like open a message box, refreshing a tree, set focus on a specific element, open a new tab etc. etc.
Composite.C1Console.Workflow.WorkflowFacade is used for getting instances of specific workflows and interacting with them. Workflows are a very fundamental part of C1 and are the way multi-step operations are defined and executed. This makes it possible to save the state of an operation so that, e.g., a 10-step wizard is persisted even if the server restarts or anything else unexpected happens. Workflows are built using Windows Workflow Foundation, so if you are familiar with that, you should feel at home.
There is also a wealth of JavaScript facades and methods you can hook into when writing extensions to the Console. Much more than I could ever cover here, so I will refrain from even getting started on that subject.
composite.config
A fundamental part of C1 is providers; almost everything is made up of providers, even much of the core functionality. Everything in the Console, from perspectives to trees, elements and actions, is fed into C1 with providers. All the standard functions, the data layer and all the widgets for use with the Function Call editor are fed into C1 with providers. All the localisation strings for use with the resources, users and permissions, URL formatters etc. are all providers.
Composite.Data.Plugins.DataProviderConfiguration
Here all providers that can respond to the methods on DataFacade (Get, Update, Delete, Add, etc.) are registered. Every provider informs the system which interfaces it can interact with, and C1 makes sure to route all requests for specific interfaces to their respective data providers.
Composite.C1Console.Elements.Plugins.ElementProviderConfiguration
Here we're defining the perspectives and the trees inside the Console. All the standard perspectives you see when you start the Console the first time are configured here, no magic or black box involved.
Composite.C1Console.Elements.Plugins.ElementActionProviderConfiguration
Action providers are able to add new menu items to all elements in the system, based on their EntityToken. This is very powerful when you want to add new functionality to existing content, like versioning, extranet security, custom cut/paste, and the list goes on.
Composite.C1Console.Security.Plugins.LoginProviderConfiguration
A LoginProvider is what the C1 Console will use to authenticate a user and let you log in or not. Unfortunately this isn't very open, but with some reflection you should be all set.
Composite.Functions.Plugins.FunctionProviderConfiguration
Composite C1 will use all the registered FunctionProviders to populate its internal list of functions on system startup.
Composite.Functions.Plugins.WidgetFunctionProviderConfiguration
WidgetProviders are used in things like the Function Call Editor or in Forms Markup to render custom UI for selecting data.
Composite.Functions.Plugins.XslExtensionsProviderConfiguration
Custom extensions for use in XSLT templates are registered here
And then we have a few sections for pure configuration, like caching or what to parallelize, but these are not as interesting as the providers.
Defining and using sections
Sections in composite.config and other related .config files are completely standard .NET configuration and obey the rules thereof. That means that to be able to use a custom element, like e.g. Composite.Functions.Plugins.WidgetFunctionProviderConfiguration, it has to be defined as a section. A section has a name and refers to a type that inherits from System.Configuration.ConfigurationSection. Composite uses the Microsoft Enterprise Libraries for handling most of these common things like configuration, logging and validation, and therefore all Composite's sections inherit from Microsoft.Practices.EnterpriseLibrary.Common.Configuration.SerializableConfigurationSection. Now, this type just has to have properties for all the elements we want to be able to define in the .config file, and .NET will automatically make sure to wire things up for us.
If you want to access configuration for a particular section you would call Composite.Core.Configuration.ConfigurationServices.ConfigurationSource.GetSection(".. section name"), cast it to your specific type, and you're good to go.
Adding extra properties to already defined sections
Normally .NET would complain if you write elements or attributes in the .config files that aren't recognized by the type responsible for the section or for the element. This makes it hard to write a truly flexible module system where external authors can add specific configuration options to their providers, and therefore we have the notion of an Assembler. It's a ConfigurationElement class with a Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.AssemblerAttribute attribute assigned to it, which in turn takes a Microsoft.Practices.EnterpriseLibrary.Common.Configuration.ObjectBuilder.IAssembler implementation as argument that is responsible for reading these custom attributes and values from the element in the .config file and emitting a usable object from it. This way .NET won't complain about an invalid .config file, since we inject a ConfigurationElement object that has properties for all our custom attributes, and we can get hold of them when reading the configuration through the IAssembler.
Slides
Some overview slides can be found at these links:
Overview
Extensibility points
Page request handling
Function system
Data system
Data type system
Inspiration and examples
The C1Contrib project on GitHub is a very good introduction to how to interact with the different parts of C1. It's a collection of small packages that can be used as they are, or for inspiration. There are packages that manipulate dynamic types to enable interface inheritance. Other packages use the JavaScript API in the Console, while others show how to make Function Providers, Trees, and how to hook commands onto existing elements. There are even examples of how to manipulate the SOAP web service communication going on between client and server, so you can make it do things the way you want. And the list goes on.
We have an automated test suite, using Borland Silk Test 2008 R2 to carry out regression tests of a new in-house product.
The test script consistently refers to controls by their index:
Form.Control3 ...
We've made a "minor" change to the main form of the application, and now the control that used to have index 3 has index 4.
The easy, but tedious, fix is to edit the scripts to reference Control4 instead of Control3, but this remains pretty brittle.
How do we instead identify the controls by name, so that instead of referencing Control3 we specify "the control named ribbon"?
(We believe that referencing things by name will be significantly less brittle.)
We've tried the obvious:
Form.ribbon
which doesn't execute at all.
The primitive intellisense in the editor doesn't show much of use - no Controls property, no GetXX or FindXX methods.
Our application is written using C# on .NET 3.5, and does make use of third party controls.
SilkTest usually stores the information needed to locate the controls in your application in an .inc file. The part
Form.Control3 ...
you mentioned is a reference to the structure in that .inc file. When your application changes, you should be able to adapt your test scripts by simply updating the entry in the .inc file.