UI-Automation testing of obfuscated (WPF) applications

I was wondering how difficult it is to run UI tests when the code has been obfuscated, especially for WPF applications, using testing frameworks that access the application's automation properties and aren't image-based (e.g. Ranorex, TestStudio, TestComplete, Squish, ...).
I can only find little information about this, which implies that testing should always be done before the code is obfuscated, but not precisely why.
One might argue, however, that tests should be run on the version that's actually being shipped to the customer. Also, if we're using 3rd-party components as part of our software, we might not have the luxury of an un-obfuscated version.
As far as I understand UI Automation, the goal is to expose relevant properties of the application so that they can be used not just by testing frameworks, but also by screen readers and the like.
Therefore I can't quite understand why there might be problems once the code has been obfuscated. The obfuscation itself shouldn't influence the number of exposed properties at all, should it?

I can't speak for the others, but Ranorex relies on UIA (UIAutomation), an automation and accessibility framework, to automate WPF apps.
UIA is almost never affected by obfuscation. Also keep in mind that most obfuscation tools avoid obfuscating public members of public classes, which is what most UI controls use.
The only exceptions are rare cases in which you explicitly configure the obfuscation tool to obfuscate strings that might affect UIA, such as the AutomationProperties attached properties.
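For example, the identifiers UIA clients look up are plain strings set via the AutomationProperties attached properties. A minimal C# sketch (the control and ID names are made up for illustration):

    using System.Windows.Automation;
    using System.Windows.Controls;

    static class AutomationSetup
    {
        public static Button CreateSubmitButton()
        {
            var submit = new Button { Content = "Submit" };
            // UIA clients (test frameworks, screen readers) locate the control
            // by this ID. It's a plain string literal, so member-renaming
            // obfuscation doesn't touch it; only explicit string obfuscation
            // could break the lookup.
            AutomationProperties.SetAutomationId(submit, "SubmitButton");
            AutomationProperties.SetName(submit, "Submit order");
            return submit;
        }
    }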
Another rather rare exception might have to do with reflection. If you use reflection (usually a bad idea, but sometimes unavoidable) to activate less reachable areas of your app, then obfuscation might pose a problem. This problem is easily solved by adding a few exceptions to the obfuscation tool, or running the tests before obfuscation.
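If reflection is unavoidable, most .NET obfuscators honor the standard ObfuscationAttribute, so you can exclude exactly the members you look up by name. A sketch (the type and member names are hypothetical):

    using System;
    using System.Reflection;

    namespace MyApp
    {
        // Most .NET obfuscators respect this standard attribute and keep the
        // original names, so string-based reflection lookups keep working.
        [Obfuscation(Exclude = true, ApplyToMembers = true)]
        public class DiagnosticsPanel
        {
            public void Show() { Console.WriteLine("diagnostics"); }
        }

        static class ReflectionActivator
        {
            public static void ShowDiagnostics()
            {
                // This lookup would break if "MyApp.DiagnosticsPanel" were renamed.
                var type = Type.GetType("MyApp.DiagnosticsPanel");
                var instance = Activator.CreateInstance(type!);
                type!.GetMethod("Show")!.Invoke(instance, null);
            }
        }
    }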
Theoretically, it shouldn't matter whether you test the app before or after it's been obfuscated because obfuscation should theoretically not have any effect on the app's logic. In practice, there are some differences and this might affect your tests, though very rarely. Obfuscated apps tend to be a bit faster, and sometimes obfuscation mangles unexpected or unplanned code elements. So you have to ask yourself whether you want to perform UI automation on the app the users will actually get and catch obfuscation issues that might only affect UI automation, or test the app before obfuscation to ensure the app's behavior is correct regardless of whatever additional build and deployment steps might be waiting in the pipeline. Obviously, you'll have to deal with the possible repercussions of whichever approach you choose.
Another consideration for running the tests before obfuscation is that if application errors are encountered while running the tests, developers will have an easier time debugging them. However, if the programmers know how to debug obfuscated symbols (with code maps or the like), then this consideration is mostly moot.

Related

Why do I need to test Redux implementations while I can just test the DOM using a testing library?

I can't understand the full process of testing. I am using a testing library and I feel comfortable testing just texts and labels; I feel I don't need to test any implementation details in React or Redux. This is what I read in the React Testing Library documentation, since Enzyme forced people to always test React implementation details.
Now, if all texts, labels and displayed values are right in my tests, this means that everything should be OK. But unfortunately, when I am using Redux, I am always forced to test Redux implementations (mocking the store, mocking reducers, etc.) and async behaviors like fetching data and so on.
Why do I need to test Redux implementations, as long as I can just test displayed values and passing tests will always indicate that the Redux implementations work correctly in my project?
I can't understand the full process of testing
That's understandable. Testing and software quality assurance is a huge professional field (there's more to it than video game playtesting!).
I am using a testing library and I feel comfortable with testing just texts, labels and I feel I don't need to test any implementation in React or Redux
This is the wrong attitude to have. What you're describing is a very non-rigorous, slapdash, haphazard - and most importantly, shallow - kind of testing.
Just because data on the user's screen appears correct doesn't mean it actually is. How do you know you aren't actually interacting with a mocked UI, or that the backend database actually contains the new data? Or that there weren't any negative side effects (such as the system deleting everything else - and yes, that happened to me once...)?
By analogy, you're saying you'd be happy to fly in an airplane that was tested only by ensuring that moving the control yoke resulted in the on-screen airplane icon moving in the right direction - even if the plane is still firmly on the ground. I would not want to fly on that plane.
Now, if all texts, labels and displayed values are right in my tests, this means that everything should be OK
No, it doesn't. See above.
but unfortunately, when I am using Redux, I am always forced to test Redux implementations (mocking the store, mocking reducers, etc.) and async behaviors like fetching data and so on.
Yes. You're being forced to do the right thing. It's called the Pit of Success. Don't fight it; it's for the best. These libraries, platforms and frameworks are designed by people with more experience in software design than either of us, so if they tell us to do something, we should do it - and if we disagree, we need to formalize our objections and duke it out in GitHub issues with academic rigour, not in Stack Overflow posts arguing that something's unnecessary because you just don't feel like it. Apologies for being blunt, but I hope you never work in a safety-critical industry or sector until your attitude changes, because I never want to see another Therac-25 incident - which was directly caused by people sharing your attitude towards software testing.
Why do I need to test Redux implementations, as long as I can just test displayed values and passing tests will always indicate that the Redux implementations work correctly in my project?
Because what you're describing does not provide anywhere close to full code coverage.
Here's a bunch of assorted things to consider:
Software testing (and systems testing in general, in any field) can generally be lumped into these categories:
Unit testing: testing a single "unit" of your code in isolation from everything else.
(Side-note: many people are currently abusing unit-testing frameworks like xUnit and MSTest to implement what are actually integration tests, so many people don't understand the real difference between integration and unit tests, which is depressing...). A "unit" would be something like a single class or function, not an entire GUI application.
Your current testing strategy is not unit testing, because you aren't testing anything in isolation: you have to fire up the entire application, including the React/Redux pipeline, a web server, and an extremely complicated, multi-billion-dollar GUI application: the web browser.
Generally speaking: "if you need concrete dependencies (instead of fakes or mocks) to test something, it isn't a unit-test, it's an integration-test".
Integration testing: testing multiple components that interact with each other.
This is a rather abstract definition, but it can mean things like testing an application's business-logic code when it's coupled to (a copy of!) your production database. It could also include testing a business layer with a GUI attached to it - but GUI testing is not easily automated, so many (though not all) people would not consider what you're doing a unit test, especially as what you've described implies that your tests aren't checking for side effects or verifying other changes of state elsewhere in the system (such as in the database or a backend web service).
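To make the distinction concrete, here's a minimal C# sketch using xUnit (one of the frameworks mentioned above); CartService and all the names are hypothetical:

    using Xunit;

    // Hypothetical unit under test: pure logic, no I/O, no UI.
    public class CartService
    {
        public decimal Total(decimal[] prices)
        {
            decimal sum = 0;
            foreach (var price in prices) sum += price;
            return sum;
        }
    }

    public class CartServiceTests
    {
        [Fact] // A unit test: one class in isolation, no database, no browser.
        public void Total_SumsAllPrices()
        {
            var service = new CartService();
            Assert.Equal(30m, service.Total(new[] { 10m, 20m }));
        }

        // An integration test, by contrast, would wire CartService up to a
        // real (copy of a) database or web service and verify side effects there.
    }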
There are other types of tests besides unit and integration - but those two are the main types of fully automated tests that every application should have - and every application should have good code-coverage from unit and integration tests especially. Do note that code-coverage does not imply exhaustiveness, and achieving 100% code-coverage is often a waste of time if that code includes trivial implementations like boilerplate code, or parameter validation code, or code generated by a tool that itself is very-well tested.
Generally speaking: if a piece of code "is complicated" or changes regularly, it should have "good" (75%+? 80%? 90%?) code-coverage.
Because testing software via GUIs is very difficult (and brittle, as the GUI is usually the part of a user-facing system that changes most), it's often not subject to automated testing anywhere near as much as it should be - which is why it's important to ensure good coverage of the non-GUI parts with automated testing, which in turn reduces the amount of manual GUI testing that needs doing.
Finally, a big thing to consider with the Redux pattern in particular is that Redux is not specific to GUI applications. Theoretically, you should be able to take a Redux application, copy and paste it into a server-side Node.js application, hook it up to a virtual DOM, and hey presto: your application no longer requires client-side JavaScript to work! It also means you can get great code coverage of your application just by using a special virtual DOM intended for testing rather than a real browser DOM - but your current approach won't work with this, because you're talking about only verifying changes to a real browser DOM, not a virtual DOM.

LaunchDarkly: multi-platform feature flagging and branching questions

Looking at LaunchDarkly for feature flagging across our enterprise apps.
Two questions:
1) I'm concerned about being able to effectively flag features across our Java back end and React front ends (2 of them). What are some strategies that people use to define features appropriately so that they are easy to manage across multiple applications/platforms?
2) Have you replaced most/all of your git / Bitbucket / ?? branching workflow with feature flags and purely trunk-based development? If not, have you made significant changes to your existing git / Bitbucket branching strategy?
Disclaimer: I work at DevCycle
I'm a few years late, but I really wanted to make sure anyone finding their way to this question has a little more information.
1) While levlaz provided an answer explaining that you should put your flag management as far up the stack as possible, I don't necessarily agree that this is the best approach to consider first.
Consider this: A simple setup of a single feature across multiple platforms
Within DevCycle (and others), when you create a Feature, it is available across all platforms and the API.
You simply request a feature for a user on any platform, and if the user qualifies, you'll receive it. There is no extra setup necessary to enable it on various platforms.
This means that if a feature is meant to be accessed on either your Java backend or React frontend, you can be guaranteed that the feature will be available at the correct times for the correct user/service regardless of where you call it from.
In short: one single Feature is managed across all platforms in one spot, with one toggle (if desired).
Another approach: a single feature across multiple platforms with different toggles or use cases.
You could simply create multiple flags, one per platform a feature is meant to be available on, and manage each individually. However, this isn't entirely necessary!
Within a feature setup, you can simply have two separate rules defining different variations being delivered to each platform. For example, you can set up a simple rule which ensures that Java will receive the feature, but React would not.
A unique DevCycle approach: Managing multiple platforms independently.
Here is something DevCycle offers that may handle your use case in a unique way:
Imagine that every single time you create a feature, both a Java and a React version of that feature are created.
These platforms would be managed separately within each feature, meaning there is no potential for accidental feature data bleeding between platforms in the event that a feature exists on one platform but not on another.
You can set up each platform as an entirely separate entity, meaning they would use different SDK keys, and all targeting will always be separate.
In the example above, the feature would be entirely disabled and unavailable to any Java SDKs calling out to DevCycle, but it would be available in React.
tl;dr
It's up to you how you want to manage things across platforms. DevCycle makes it easy to do this however you'd like: have all features across all platforms, splitting up your platforms, or just choosing to target differently depending on the feature.
2) Like levlaz said, that is the ideal, but you'll likely never achieve full trunk-based nirvana, as there are a lot of use cases for having various environments and paths for your team to take in various scenarios.
That said, we've seen a lot of folks successfully get REALLY close by using Feature Flags.
I wouldn't suggest removing your build pipelines and CI/CD in favor of feature flags; rather, feature flags enhance them.
For example, with feature flags you can remove the concept of feature branches and large feature pull requests. Instead, ensure that everything that ever goes into production is behind a feature flag. To enforce this, you can use workflow tools like GitHub Actions that do these safety checks for you. With those guards in place, you should be able to simply push to prod without any concerns and run your deploy scripts on each merge. Then you can target just your internal / QA users for testing, and not worry about things hitting prod users!
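For illustration, "everything behind a flag" looks roughly like this; IFeatureFlags and the flag key are hypothetical stand-ins for whichever SDK (LaunchDarkly, DevCycle, ...) you use:

    // Hypothetical vendor-neutral wrapper around a feature-flag SDK.
    public interface IFeatureFlags
    {
        bool IsEnabled(string flagKey, string userId);
    }

    public class CheckoutController
    {
        private readonly IFeatureFlags _flags;

        public CheckoutController(IFeatureFlags flags) => _flags = flags;

        public string Render(string userId)
        {
            // New code ships to production dark; only targeted users
            // (internal/QA first, then a gradual rollout) ever see it.
            return _flags.IsEnabled("new-checkout-flow", userId)
                ? "new checkout"
                : "old checkout";
        }
    }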
You may still want some sort of disaster-recovery environment and local environments, so you'll never truly hit pure trunk-based development, but you can get close!
[Disclaimer: I work at LaunchDarkly]
For your first question, my general recommendation is to put flags as "high up on the stack" as possible. At the end of the day, you are making a decision somewhere. Where you put that decision point is entirely up to you. Within LaunchDarkly the flags are agnostic to the implementation so a single flag can live on the server, mobile, and client-side without any issues. Keep things simple.
For your second question: in practice, it is very rare to see teams fully make the switch to trunk-based development. This is the goal of 99% of the teams I work with, but depending on whether you have a greenfield or a brownfield project, the complexity of making the switch may not be worth the effort.
Lastly, our CTO wrote a book this year called "Effective Feature Management" [1]. If you have not heard of it, I recommend you take a look; I think you'll find some great insights there.
[1] https://launchdarkly.com/effective-feature-management-ebook/

Logic app or Web app?

I'm trying to decide whether to build a Logic App or a Web App.
It has to do things I'm quite comfortable doing in C#: receive messages in various formats (a few thousand per day), translate them, make API calls and forward them. None of the endpoints are widely used, so the out-of-the-box connectors won't be a benefit. Some require custom headers, the contents of which are calculated using a hashing algorithm. Some of the work involves converting Json into XML and vice-versa.
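(For illustration, the kind of C# involved; a sketch assuming Newtonsoft.Json for the JSON-to-XML conversion and an HMAC-based header, since the exact hashing scheme isn't specified:)

    using System;
    using System.Security.Cryptography;
    using System.Text;
    using Newtonsoft.Json;

    static class MessageForwarding
    {
        // JSON -> XML using Newtonsoft.Json's built-in converter.
        public static string JsonToXml(string json) =>
            JsonConvert.DeserializeXmlNode(json, "root")!.OuterXml;

        // A custom header computed from the payload. HMAC-SHA256 is an
        // assumption; the real endpoints may require a different algorithm.
        public static string ComputeSignature(string payload, string secret)
        {
            using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret));
            return Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        }
    }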
From what I've read, one of the key selling points of Logic Apps is that you don't have to write any code. Since our organisation is actually quite comfortable with code, that doesn't feel like it'll be a benefit.
Am I missing something? Are there any compelling reasons why a Logic App would be better than a Web App in this instance?
Using Logic Apps has a few additional benefits over just writing code, including:
Out-of-the-box monitoring. For every execution you get to see exactly what happened in each step of the process, with a monitoring view that replicates your Logic App design view.
Built-in failure handling. Logic Apps will automatically retry failed calls, and lets you either customize the retry policy or build a custom retry loop with a do-until pattern.
Out-of-the-box alerting. You can configure alerts to inform you of failures.
Serverless. You don't worry about sizing or scaling, and you pay by consumption.
Faster development. Logic Apps lets you build out the solution faster, especially considering that you don't have to write code for the monitoring views, alerting, and error handling that come out of the box.
Easy to extend. If you are already using a Logic App, access to over 125 connectors to various services makes it easy to add business value or make your workflow smarter, e.g. by including cognitive services, with very little extra effort.
I've decided to keep away from Logic Apps for these reasons:
It is not supported outside Azure. We aren't tied to any other providers, and to use Logic Apps would break that independence.
I don't know how much of the problem is readily solvable using Logic Apps. (It seems I would be solving all sorts of problems that wouldn't be problems in C#. This article details some issues encountered while developing a simple process using an earlier version of Logic Apps.)
Nobody has come up with an argument more compelling than the reasons I've given above (especially the first one) why we should use it, so it would be a gamble with little to gain and plenty to lose.
You can think of Logic Apps as an orchestrator - something that takes external pieces of functionality, and weaves a workflow together.
It has nothing to do with your requirement of "writing code" - your code can be external functions on any platform - on-prem, AWS, Azure, Zendesk, and all of your code can be connected together using Logic Apps.
Regardless of which platform you choose, you will still have cross-cutting concerns such as monitoring, logging, alerting, deployments, etc., and Logic Apps addresses all of those requirements very robustly.

How to prevent an application's DLLs from being decompiled?

As far as I know, there are applications that can decompile DLLs to recover source code from application files.
Not only do I not want others to have the source, I also don't want others to be able to use the DLL files themselves. So how should I lock down the DLLs, and how safe are they?
Before I get into anything else, I will state that it is impossible to protect your application entirely.
That being said, you can still make things more difficult. There are many obfuscators out there that will help you make it more difficult for someone to decompile your application and understand it.
http://en.wikipedia.org/wiki/List_of_obfuscators_for_.NET
.NET obfuscation tools/strategy
That's truly the best you can hope for.
Personally, I really wouldn't bother going too deep, if at all. You'll find that you're spending too much money or time (or both) trying to protect your application from people up to no good. These are the same people who, no matter what barriers you throw up, will keep trying, and given the nature of managed languages, they will most likely succeed. In fact, most obfuscated assemblies can be deobfuscated with simple tools... In the meantime, you've let other important features and bug fixes slip by because you spent the time and effort on security measures instead.
Obfuscation is one way to protect your code, and the right amount depends on your needs. If you have a super-secretive program, you may want to explore more expensive and in-depth strategies.
However, if you're developing a business application or something similar that wouldn't be worth much of any hacker's time to reverse engineer, minimal-to-normal obfuscation strategies are good enough. As the main answer suggests, look at those links.
Recently, I came upon ConfuserEx, a free open-source obfuscator that does the job for WPF apps and more. It seems to be very powerful, effective and customizable.
ConfuserEx on GitHub
For DLLs there is almost nothing you can do. Obfuscating the files is the best option, but public members will keep their original names. If you pack the DLLs into your EXE file, however, and obfuscate that, no one can use them easily.
I used ConfuserEx and it was very easy to use and effective.

Does TDD apply well when developing a UI?

What are your opinions on, and experiences with, using TDD when developing a user interface?
I have been pondering this question for some time now and just can't reach a final decision. We are about to start a Silverlight project, and I checked out the Microsoft Silverlight Unit Test Framework with TDD in mind, but I am not sure how to apply the approach to UI development in general - or to Silverlight in particular.
EDIT:
The question is about whether it is practical to use TDD for UI development, not about how to do separation of concerns.
Trying to test the exact placement of UI components is pointless. First, because layout is subjective and should be "tested" by humans. Second, because as the UI changes you'll be constantly rewriting your tests.
Similarly, don't test the GUI components themselves, unless you're writing new components. Trust the framework to do its job.
Instead, you should be testing the behavior that underlies those components: the controllers and models that make up your application. Using TDD in this case drives you toward a separation of concerns, so that your model is truly a data management object and your controller is truly a behavior object, and neither of them are tightly coupled to the UI.
I look at TDD from a UI perspective more in terms of the bare acceptance criteria the UI has to pass. In some circles, this is labeled ATDD, or Acceptance Test Driven Development.
The biggest over-engineering I've committed with TDD for UIs was getting all excited about using automated tests to test look-and-feel issues. My advice: don't! Focus on testing the behavior: this click produces these events, this data is available or displayed (but not how it's displayed). Look and feel really is the domain of your independent testing team.
The key is to focus your energy on "High Value Add" activities. Automated style tests are more of a debt (keeping them up to date) than a value add.
If you separate your logic from the actual GUI code, you can easily use TDD to build the logic, and it will be a lot easier to build another interface on top of your logic as well, if you ever need to.
I can't speak to Microsoft Silverlight, but I never use TDD for any kind of layout issue; it's just not worth the time. What works well is using unit testing to check any wiring, validation and UI logic that you implemented. Most systems provide programmatic access to the actions the user takes, and you can use these to assert that your expectations are met: e.g. calling the click() method on a button should execute whatever code you intended, selecting an item in a list view should update the relevant UI elements with that item's properties, and so on.
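In WPF/Silverlight terms this usually means exercising the view model rather than the rendered controls. A minimal sketch with xUnit (the view model and its members are hypothetical):

    using Xunit;

    // Hypothetical view model: the "wiring" behind a Save button.
    public class EditorViewModel
    {
        public bool IsDirty { get; private set; } = true;

        public void Save()
        {
            // ... persist changes ...
            IsDirty = false;
        }
    }

    public class EditorViewModelTests
    {
        [Fact] // Asserts the behavior a button click triggers, not the pixels.
        public void Save_ClearsDirtyFlag()
        {
            var vm = new EditorViewModel();
            vm.Save();
            Assert.False(vm.IsDirty);
        }
    }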
Based on your edit, here's a little more detail about how we do it on my current team. I'm doing Java with GWT, so the translation to Silverlight might be a bit off.
Requirement or bug comes in to the developer. If there is a UI change (L&F) we do a quick mock up of the UI change and send it over to the product owner for approval. While we are waiting on that, we start the TDD process.
We start with at least one of either a web test (using Selenium to drive user clicks in a browser; see the sketch after these steps) or a "headless" functional test using Concordion, Fit or something like it. Once that's written and failing, we have a high-level vision of where to attack the underlying services in order to make the system work right.
Next step is to dig down and write some failing unit and integration tests (I think of unit tests as stand-alone, no dependencies, no data, etc. Integration tests are fully wired tests that read/write to the database, etc.)
Then, I make it work from bottom up. Sounds like your TDD background will let you extrapolate the benefits here. Refactor on the way up as well....
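For the first step, a failing Selenium web test in C# might look like this sketch (the URL and element IDs are hypothetical):

    using OpenQA.Selenium;
    using OpenQA.Selenium.Chrome;
    using Xunit;

    public class LoginWebTests
    {
        [Fact] // Written first, so it fails until the feature is implemented.
        public void Login_ShowsWelcomeMessage()
        {
            using IWebDriver driver = new ChromeDriver();
            driver.Navigate().GoToUrl("https://localhost:5001/login");

            driver.FindElement(By.Id("username")).SendKeys("alice");
            driver.FindElement(By.Id("password")).SendKeys("secret");
            driver.FindElement(By.Id("submit")).Click();

            Assert.Contains("Welcome", driver.FindElement(By.Id("banner")).Text);
        }
    }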
I think this blog post by Ayende Rahien answers my question nicely using a pragmatic and sound approach. Here are a few quotes from the post:
Testing UI, for example, is a common place where it is just not worth the time and effort.
...
Code quality, flexibility and the ability to change are other things that are often attributed to tests. They certainly help, but they are by no mean the only (or even the best) way to approach that.
Tests should only be used when they add value to the project, without becoming the primary focus. I am finally quite certain that using test-driven development for the UI can quickly become the source of much work that is simply not worth it.
Note that the post seems to be mainly about testing AFTER things have been built, not BEFORE (as in TDD) - but I think the following golden rule still applies: the most important things deserve the greatest effort, and less important things deserve less effort. Having a unit-tested UI is often not THAT important, and as Ayende writes, the benefit of using TDD as the development model is probably not so great - especially when you consider that developing a UI is normally a top-down process.
GUIs by their very nature are difficult to test, so as Brian Rasmussen suggests, keep the dialog box code separate from the GUI code.
This is the Humble Dialog Box Pattern.
For example, you may have a dialog box where details (e.g. credit card number) are input and you need to verify them. For this case you would put the code that checks the credit card number with the Luhn algorithm into a separate object which you test. (The algorithm in question just tests if the number is plausible - it's designed for checking for transcription errors.)
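Such a check is trivial to unit test once it lives in its own object. A sketch of the Luhn check in C#:

    static class LuhnValidator
    {
        // Returns true if the digit string passes the Luhn checksum.
        public static bool IsValid(string number)
        {
            int sum = 0;
            bool doubleIt = false;

            // Walk right to left, doubling every second digit;
            // doubled digits above 9 have 9 subtracted (e.g. 8*2=16 -> 7).
            for (int i = number.Length - 1; i >= 0; i--)
            {
                int digit = number[i] - '0';
                if (doubleIt)
                {
                    digit *= 2;
                    if (digit > 9) digit -= 9;
                }
                sum += digit;
                doubleIt = !doubleIt;
            }
            return sum % 10 == 0;
        }
    }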
At my workplace we use TDD and we actually unit test our UI code (for a web application) thanks to Apache Wicket's WicketTester, but we aren't testing that some descriptive label comes before a text field or anything like that. Instead, we test that the hierarchy of the components is roughly correct ("the label is in the same subtree as the text field") and that the components are what they're supposed to be ("this label really is a label").
Like others have already said, it's up to an actual human to determine how those components are placed in the UI, not the programmer - especially since programmers have a tendency to build all-in-one/obscure command-line tools instead of UIs that are easy to use.
Test-driven development lends itself more to developing code than to developing user interfaces. There are a few different ways in which TDD is performed, but the preferred way of true TDD is to write your tests first, then write the code to pass the tests. This is done iteratively throughout development.
Personally, I'm unsure how you would go about performing TDD for UIs; however, the team I am on performs automated simulation tests of our UIs. Basically, we have a suite of simulations that run every hour against the most recent build of the application. These tests perform common actions and verify that certain elements, phrases, dialogs, etc. properly occur, based on, say, a set of use cases.
Of course, this does have its disadvantages. The downside to this is that the simulations are locked into the code representing the case. It leaves little room for variance and is basically saying that it expects the user to do exactly this behavior with respect to this feature.
Some testing is better than no testing, but it could be better.
Yes, you can use TDD to great effect for GUI testing of web apps.
When testing GUIs you typically use stub/fake data that lets you test all the different state changes in your GUI. You must separate your business logic from your GUI, because in this case you will want to mock out your business logic.
This is really neat for catching those things the testers always forget to click on; they get test blindness too!
