I am using AutoJSContext in the Gecko browser to evaluate script; how can I do the same with WebView2? - wpf

I am using AutoJSContext in the Gecko browser (GeckoFX), and with it I call EvaluateScript and am able to get data. But now I am using WebView2; how can I do the same?
Gecko browser:
using (AutoJSContext context = new AutoJSContext(browser.Window))
{
    var userIdResult = context
        .EvaluateScript("userId", (nsIDOMWindow)browser.Window.DomWindow);
}
Above is the code I use with the Gecko browser. Now I need to get the user ID from WebView2. Please help me with this issue; I have not been able to find an alternative.

You can use CoreWebView2.ExecuteScriptAsync to inject script into the current top-level document of the WebView2 and receive the result as a JSON string. The method doesn't have a way to specify an execution context. Script is executed in the global context of the top-level document similar to running script in the console of DevTools in the browser.
For example something like the following:
var resultAsJSON = await coreWebView2.ExecuteScriptAsync("window.userId");
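The returned string is JSON-encoded, so a string value comes back wrapped in quotes. A minimal sketch of reading it in a WPF app, assuming webView is your WebView2 control and System.Text.Json is available for decoding:
// CoreWebView2 is only available after initialization,
// e.g. await webView.EnsureCoreWebView2Async();
string resultAsJson = await webView.CoreWebView2.ExecuteScriptAsync("window.userId");

// The result is JSON-encoded: a string value arrives wrapped in quotes,
// and "null" means window.userId was not defined on the page.
string userId = System.Text.Json.JsonSerializer.Deserialize<string>(resultAsJson);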

Related

How to make Selenium-Wire perform an indirect GraphQL AJAX request I expect and need?

Background story: I need to obtain the handles of the tagged Twitter users from an attached Twitter media. There's no current API method to do that unfortunately (see https://twittercommunity.com/t/how-to-get-tags-of-a-media-in-a-tweet/185614 and https://github.com/twitterdev/open-evolution/issues/34).
I have no other choice but to scrape, this is an example URL: https://twitter.com/justinwood_/status/1626275168157851650/media_tags. This is the page which pops up when you click on the tags link under the media of the parent Tweet: https://twitter.com/justinwood_/status/1626275168157851650/
The React-generated DOM is deep and ugly, but would be scrapeable; however, I do not want to log in with any account and risk getting banned. Unfortunately, when you visit https://twitter.com/justinwood_/status/1626275168157851650/media_tags in an Incognito window, the popup shows up dead empty. However, when I dig into the network requests, the /TweetDetail GraphQL endpoint is full of messages about the anonymous page visit, and fortunately it still contains the list of handles I need despite all of this.
So what I need to have is a scraper which is able to process JavaScript, and capture the response for that specific GraphQL call. Selenium uses a headless Chrome under the hood, so it is able to process JavaScript, and Selenium-Wire offers the ability to capture the response.
Unfortunately, my crafted Selenium-Wire script only captures the TweetResultByRestId and UsersByRestId GraphQL requests and is missing TweetDetail. I don't know what to tweak to make all the requests happen. I have iterated over a ton of Chrome options. Here is a variation of my script:
from seleniumwire import webdriver
from selenium.webdriver.chrome.service import Service
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument("--disable-extensions")
chrome_options.add_argument("--disable-gpu")
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--headless") # for Jenkins
chrome_options.add_argument("--disable-dev-shm-usage") # Jenkins
chrome_options.add_argument('--start-maximized')
chrome_options.add_argument('--window-size=1900,1080')
chrome_options.add_argument('--ignore-certificate-errors-spki-list')
chrome_options.add_argument('--ignore-ssl-errors')
selenium_options = {
    'request_storage_base_dir': '/tmp',  # Use /tmp to store captured data
    'exclude_hosts': ''
}
ser = Service('/usr/bin/chromedriver')
ser.service_args = ["--verbose", "--log-path=test.log"]
driver = webdriver.Chrome(service=ser, options=chrome_options, seleniumwire_options=selenium_options)
tweet_id = "1626275168157851650"
twitter_media_url = f"https://twitter.com/justinwood_/status/{tweet_id}/media_tags"
driver.get(twitter_media_url)
driver.wait_for_request("/TweetDetail", timeout=10)
Any ideas?
It looks like I actually need to scrape the parent Tweet URL https://twitter.com/justinwood_/status/1626275168157851650/ instead, and now the GraphQL call I'm after does seem to happen. I probably got confused while trying 100 combinations.

Way to implement resetApp for React Native with wdio-cli/? - webdriverio

Hello there! I am using @wdio/cli, so I created wdio.conf.js with that command and then started writing tests. The issue appears when I have more than one test in a single test file or across multiple test files.
In the test file I have something like this:
beforeEach(async function () {
    await $('~home').waitForDisplayed(81000, false);
});
Here ~home is a selector for an element on the first view shown when the app starts, and this error appears:
element ("~home") still not displayed after 10000ms
So I need to do some kind of driver.resetApp(), but I don't know how to do it, what imports I need, etc.
Did you try resetApp? You can't use driver as the "main object"; everything is under the browser variable. Try this:
//async
await browser.resetApp();
//sync
browser.resetApp();
Check the Appium and wdio documentation.

Solr 4.3.1 Data-Import command

I'm currently using Solr 4.3.1. I have configured the DataImportHandler (DIH) for my Solr instance, and I would like to do a full import from the command prompt. I know the URL will be something like this: http://localhost:8983/solr/corename/dataimport?command=full-import&clean=true&commit=true. Is there any way I can do this without using curl?
Thanks
Edit
string Text = "http://localhost:8983/solr/Latest_TextBox/dataimport?command=full-import&clean=true&commit=true";
var wc = new WebClient();
var Import = wc.DownloadString(Text);
I am currently using the above code.
Call it like a normal REST URL, that's it! I use this in my application for importing and indexing data from my local drive and it works fine. Make an HTTP GET request and check the response to see whether the import was successful; you don't need any specific API for that. Below is sample code that makes a GET request in C#. Try the data import handler URL with it, it should work:
Console.WriteLine("Making API Call...");
using (var client = new HttpClient(new HttpClientHandler { AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate }))
{
client.BaseAddress = new Uri("https://api.stackexchange.com/2.2/");
HttpResponseMessage response = client.GetAsync("answers?order=desc&sort=activity&site=stackoverflow").Result;
response.EnsureSuccessStatusCode();
string result = response.Content.ReadAsStringAsync().Result;
Console.WriteLine("Result: " + result);
}
Console.ReadLine();
}
}
}
You'll have to call the URL in some way - Solr only operates through a REST API. There is no command-line API (the command-line tools available just talk to the API). So use your preferred way of talking to an HTTP endpoint, be that curl, wget, GET, or whatever is available in your programming language of choice.
The bundled solr CLI application does not have any existing command for triggering a full import, as far as I was able to see (it would just talk to the REST API by calling the URL you've already referenced).
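For completeness, here is a minimal sketch of triggering the import from C# with HttpClient, reusing the core name and URL from the question (adjust the host and core to your setup):
using System;
using System.Net.Http;

// Core name and host are taken from the question; adjust to your setup.
var importUrl = "http://localhost:8983/solr/corename/dataimport"
              + "?command=full-import&clean=true&commit=true";

using (var client = new HttpClient())
{
    // The dataimport handler starts the import and returns immediately.
    var response = client.GetAsync(importUrl).Result;
    response.EnsureSuccessStatusCode();
    Console.WriteLine(response.Content.ReadAsStringAsync().Result);

    // Poll command=status to see whether the import has finished.
    var status = client.GetStringAsync(
        "http://localhost:8983/solr/corename/dataimport?command=status").Result;
    Console.WriteLine(status);
}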

Parsing Swagger JSON data and storing it in .net class

I want to parse the Swagger data from the JSON I get from {service}/swagger/docs/v1 into a dynamically generated .NET class.
The problem I am facing is that different APIs can have different numbers of parameters and operations. How do I dynamically parse the Swagger JSON data for different services?
My end result should be a list of all APIs and their operations in a variable that I can easily search.
Did you ever find an answer for this? Today I wanted to do the same thing, so I used the AutoRest open-source project from MSFT, https://github.com/Azure/autorest. While it looks like it's designed for generating client code (code to consume the API documented by your Swagger document), at some point on the way to producing that code it had to have done exactly what you asked in your question: parse the Swagger file and understand the operations, inputs, and outputs the API supports.
In fact, we can get at this information - AutoRest publicly exposes it.
So use NuGet to install AutoRest. Then add a reference to AutoRest.Core and AutoRest.Model.Swagger. So far I've simply gone for:
using Microsoft.Rest.Generator;
using Microsoft.Rest.Generator.Utilities;
using System.IO;
...
var settings = new Settings();
settings.Modeler = "Swagger";
var mfs = new MemoryFileSystem();
mfs.WriteFile("AutoRest.json", File.ReadAllText("AutoRest.json"));
mfs.WriteFile("Swagger.json", File.ReadAllText("Swagger.json"));
settings.FileSystem = mfs;
var b = System.IO.File.Exists("AutoRest.json");
settings.Input = "Swagger.json";
Modeler modeler = Microsoft.Rest.Generator.Extensibility.ExtensionsLoader.GetModeler(settings);
Microsoft.Rest.Generator.ClientModel.ServiceClient serviceClient;
try
{
    serviceClient = modeler.Build();
}
catch (Exception exception)
{
    throw new Exception(String.Format("Something nasty hit the fan: {0}", exception.Message));
}
The Swagger document you want to parse is called Swagger.json and is in your bin directory. The AutoRest.json file you can grab from their GitHub (https://github.com/Azure/autorest/tree/master/AutoRest/AutoRest.Core.Tests/Resource). I'm not 100% sure how it's used, but it seems to be needed to inform the tool about what it supports. Both JSON files need to be in your bin.
The serviceClient object is what you want. It will contain information about the methods, model types, and method groups.
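For example, to build the searchable list of operations the question asks for, something along these lines should work as a continuation of the snippet above. This is only a sketch: the property names used on the client model (Methods, Name, Url, HttpMethod) are from memory and may differ between AutoRest versions, so double-check them against the version you install.
// Sketch: dump every operation the modeler found so it can be searched.
// Property names (Methods, Name, Url, HttpMethod) are assumptions based on
// the AutoRest client model and may vary between versions.
foreach (var method in serviceClient.Methods)
{
    Console.WriteLine("{0} {1} ({2})", method.HttpMethod, method.Url, method.Name);
}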
Let me know if this works. You can try it with their resource files. I used their ExtensionsLoaderTests for reference when I was playing around (https://github.com/Azure/autorest/blob/master/AutoRest/AutoRest.Core.Tests/ExtensionsLoaderTests.cs).
(Also, thank you to Denis, an author of AutoRest.)
If this is still a question, you can use the Swagger Parser library:
https://github.com/swagger-api/swagger-parser
as simple as:
// parse a swagger description from the petstore and get the result
SwaggerParseResult result = new OpenAPIParser().readLocation("https://petstore3.swagger.io/api/v3/openapi.json", null, null);

how to change the url for SolrNet Client

I am a newbie with SolrNet, and my question is how to change the URL for the SolrNet client.
I found this on wiki
initializing code
Startup.Init<Product>("http://localhost:8983/solr");
invoking code
var solr = ServiceLocator.Current.GetInstance<ISolrOperations<Product>>();
but I don't know how to change the URL. Could someone tell me how to do this? Many thanks.
It cannot be changed with the existing SolrNet code, as it is implemented using the singleton pattern.
You have to download the code from GitHub.
Currently the following exception is thrown:
"Key ... already registered in container". You can change the code so that it always creates a new instance (bypassing the singleton pattern).
The default request handler is "/select". So SolrNet will send your requests to
http://localhost:8983/solr/select
If you wish to invoke a different request handler, you will need to get an instance of the SolrQueryExecuter and set the Handler property accordingly.
Assuming you have a request handler named "/browse":
Startup.Init<Product>("http://localhost:8983/solr");
var executor = ServiceLocator.Current.GetInstance<ISolrQueryExecuter<Product>>() as SolrQueryExecuter<Product>;
if (executor != null)
{
    executor.Handler = "/browse";
}
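Queries executed through that executor should then hit the /browse handler. A rough usage sketch continuing from the code above, assuming the Execute(ISolrQuery, QueryOptions) overload on the executor (worth verifying against your SolrNet version):
// Sketch only: Execute's exact signature may differ between SolrNet versions.
var results = executor.Execute(new SolrQuery("*:*"), new QueryOptions());
Console.WriteLine("Found {0} documents via /browse", results.NumFound);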
