We are trying to create a Cosmos DB data source in Azure Search so that we can later connect an indexer to it.
However, when trying to create the data source, I get a cryptic error message with no error code:
{
  "error": {
    "code": "",
    "message": "The request is invalid. Details: dataSource : Cannot create an abstract class.\r\n"
  }
}
Here is the PUT request sent to Azure Search (the api-key and connection string have been verified as correct):
{
  "name": "datasourceName",
  "description": "Data source on CosmosDb collection x and partition y",
  "type": "documentdb",
  "credentials": {
    "connectionString": "***"
  },
  "container": {
    "name": "collectionName",
    "query": "SELECT * FROM c WHERE c.Culture = 'y' AND c.Id LIKE 'prefix%'"
  },
  "dataChangeDetectionPolicy": {
    "highWaterMarkColumnName": "_ts"
  }
}
The URL used for that request is:
https://<servicename>.windows.net/datasources/<datasourceName>?api-version=2017-11-11-Preview
I could not find anything in the documentation about error responses when creating data sources, so some guidance would be welcome.
Regards
You need to include the OData type for the change detection policy:
{
  "#odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy",
  "highWaterMarkColumnName": "[a row version or last_updated column name]"
}
Alternatively, delete dataChangeDetectionPolicy from the request; without a concrete #odata.type it maps to the abstract base class for data change detection policies, which cannot be instantiated.
Also, as Carey MacDonald said, don't forget to add the Database part to the connectionString.
Thanks for everyone's responses.
To stay consistent with the question (a PUT, not a POST request), I'm posting the answer here; it's a mix of feedback from the previous answers and comments.
So after adding:
  "#odata.type": "#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy"
to the dataChangeDetectionPolicy JSON object (when sending the PUT request to Azure Search), and after adding the Database part to the connectionString, it now works.
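For completeness, here is an untested C# sketch of the corrected PUT (the service name, admin api-key, Cosmos DB account values and database name are placeholders; the endpoint follows the usual <servicename>.search.windows.net format):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class CreateDataSource
{
    static async Task Main()
    {
        // Placeholders: replace the service name, api-key and connection string with your own values.
        var url = "https://<servicename>.search.windows.net/datasources/datasourceName?api-version=2017-11-11-Preview";
        var body = @"{
            ""name"": ""datasourceName"",
            ""type"": ""documentdb"",
            ""credentials"": {
                ""connectionString"": ""AccountEndpoint=https://<account>.documents.azure.com;AccountKey=<key>;Database=<databaseName>""
            },
            ""container"": {
                ""name"": ""collectionName"",
                ""query"": ""SELECT * FROM c WHERE c.Culture = 'y' AND c.Id LIKE 'prefix%'""
            },
            ""dataChangeDetectionPolicy"": {
                ""#odata.type"": ""#Microsoft.Azure.Search.HighWaterMarkChangeDetectionPolicy"",
                ""highWaterMarkColumnName"": ""_ts""
            }
        }";

        using var client = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Put, url)
        {
            Content = new StringContent(body, Encoding.UTF8, "application/json")
        };
        request.Headers.Add("api-key", "<adminApiKey>");

        var response = await client.SendAsync(request);
        Console.WriteLine(response.StatusCode); // expect 201 Created (or 204 if the data source already existed)
    }
}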
Regards
Trying to call the below API by using the axios.get method. I've bypassed CORS by using the Moesif CORS Chrome extension.
The API requires tokens in order to pull the result (verified using Postman).
With the valid tokens inserted: I get a CORS error even though the CORS extension is enabled (screenshot: CORS Error).
Without the tokens: I get a 401 Unauthorized, with the CORS extension enabled as well (screenshot: 401 Unauthorized).
I'm sort of confused: is this a token authorization issue or a CORS issue? Could someone please advise? If I call another API that does not require a token, I'm able to get the result without any issue with the CORS extension enabled.
Sharing my example code here:
const tokenStr = 'abc1234'; // example
const config = {
  headers: { Authorization: `Bearer ${tokenStr}` }
};
let dcqoapi =
  "http://quote.dellsvc/v3/quotes?number=" + Quote + "&version=" + Version;
const calldcqoapi = () => { // assign a variable for a call function
  Axios.get(dcqoapi, config).then(
    (response) => {
      console.log(response);
    })
};
You need to trace HTTP responses more scientifically, using browser tools or a tool such as Charles proxy. Bear in mind that APIs are sometimes poorly implemented and don't return CORS headers correctly.
For an approach, see Step 15 of my blog post. Do you get response headers that work for the browser in all of these cases?
Call API without an access token
Call API with an invalid access token
Call API with a valid access token
The issue was resolved by disabling web security in Chrome. Here are the actions I took (works for localhost):
Create a shortcut on your desktop.
Right-click on the shortcut and click Properties.
Edit the Target property.
Set it to "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-web-security --user-data-dir="C:/ChromeDevSession"
Open Google Chrome via the shortcut (you should see the "disable-web-security" message at the top of the page)
Run your app
Copy your localhost URL and paste it into Chrome
It works perfectly now for calling the API with header tokens.
By doing the above steps we fully bypassed all the security, including CORS, so the Moesif CORS extension is no longer required.
I'm using Postman for calling the API with the headers.
One small change from my code: I removed the 'Bearer ' prefix. This API treats the Authorization header value as a plain key rather than a bearer token, so 'Bearer' shouldn't be entered in the Postman headers either.
let config = {
  headers: { Authorization: 'PLACE YOUR TOKEN HERE' }
};
let dcqoapi =
  "https://quote.dellsvc/v3/quotes?number=" + Quote + "&version=" + Version; // set DCQO API
const calldcqoapi = () => { // assign a variable for a call function
  Axios.get(dcqoapi, config).then(
    (response) => {
      console.log(response);
    })
};
I simply want to debug a controller but I can't watch the variables I get from 2sxc functions.
I tried to log variables via Log4Net by writing:
private static readonly ILog Logger = LoggerSource.Instance.GetLogger(typeof(MyClassName));
but the type ILog is not known in a 2sxc controller. Am I missing a reference?
I also found this snippet:
using DotNetNuke.Services.Log.EventLog;
var objEventLog = new EventLogController();
objEventLog.AddLog("Sample Message", "Something Interesting Happened!", PortalSettings, UserId, EventLogController.EventLogType.ADMIN_ALERT);
But I don't know what to pass for "PortalSettings", and I can't find any clue in the helpers of the 2sxc programming interface.
How do you guys debug 2sxc controllers and log events (not only for debugging)?
Thank you for your help!
Credit of these snippets: Scott McCulloch (https://www.smcculloch.com/code/logging-to-the-dnn-event-log)
This gives part of the answer: http://www.dnnsoftware.com/community-blog/cid/141723/using-log4net-with-dotnetnuke. And, it looks like the namespace is DotNetNuke.Instrumentation.
As for PortalSettings, that's the portal settings for your portal. I think that you'd need to reference DotNetNuke.Entities.Portals, and then use PortalController to retrieve the portal settings object.
Joe Craig's previous post helped me a lot.
So, in a 2sxc application, I can now log to the DNN event log (not the Windows one):
@using DotNetNuke.Services.Log.EventLog;
@using DotNetNuke.Entities.Portals;
@{
    var aujourdhui = DateTime.Now;
    var objEventLog = new EventLogController();
    PortalSettings PortalSettings = new PortalSettings();
    objEventLog.AddLog("Debug info", "Variable \"Aujourdhui\" contains: " + aujourdhui.ToString("dddd d MMMM yyyy"), PortalSettings, Dnn.User.UserID, EventLogController.EventLogType.ADMIN_ALERT);
}
The only little problem is that this PortalSettings returns the first portal even if my 2sxc app runs on the second portal (id=1). I must be missing something. But for now and for what I need (debugging), that's OK for me!
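As an untested sketch (assuming DNN's static PortalSettings.Current property is reachable from the 2sxc Razor context), you could pass the settings of the portal serving the current request instead of constructing a new PortalSettings:

@using DotNetNuke.Services.Log.EventLog;
@using DotNetNuke.Entities.Portals;
@{
    // Sketch only: PortalSettings.Current resolves the portal of the current request,
    // rather than the default portal that "new PortalSettings()" appears to give.
    var currentPortalSettings = PortalSettings.Current;
    var log = new EventLogController();
    log.AddLog("Debug info", "Current portal id: " + currentPortalSettings.PortalId,
        currentPortalSettings, Dnn.User.UserID, EventLogController.EventLogType.ADMIN_ALERT);
}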
This is a very similar question to this one:
aspnet identity invalid token on confirmation email
but those solutions do not apply because I am using the new ASP.NET Core 1.0, which includes ASP.NET Core Identity.
My scenario is as follows:
In the back end (ASP.NET Core) I have a function that sends a password reset email with a link. In order to generate that link I have to generate a code using Identity, something like this:
public async Task SendPasswordResetEmailAsync(string email)
{
    // _userManager is an instance of UserManager<User>
    var userEntity = await _userManager.FindByNameAsync(email);
    var tokenGenerated = await _userManager.GeneratePasswordResetTokenAsync(userEntity);
    var link = Url.Action("MyAction", "MyController", new { email = email, code = tokenGenerated }, protocol: HttpContext.Request.Scheme);
    // this is my service that sends an email to the user containing the generated password reset link
    await _emailService.SendPasswordResetEmailAsync(userEntity, link);
}
this would generate an email with a link to:
http://myapp:8080/passwordreset?code=CfDJ8JBnWaVj6h1PtqlmlJaH57r9TRA5j7Ij1BVyeBUpqX+5Cq1msu9zgkuI32Iz9x/5uE1B9fKFp4tZFFy6lBTseDFTHSJxwtGu+jHX5cajptUBiVqIChiwoTODh7ei4+MOkX7rdNVBMhG4jOZWqqtZ5J30gXr/JmltbYxqOp4JLs8V05BeKDbbVO/Fsq5+jebokKkR5HEJU+mQ5MLvNURsJKRBbI3qIllj1RByXt9mufGRE3wmQf2fgKBkAL6VsNgB8w==
Then my AngularJS application would present a view with a form to enter and confirm the new password, and would PUT a JSON object with the new password and the code that it got from the query parameter in the URL.
Finally my back end would get the PUT request, grab the code and validate it using Identity like this:
[HttpPut]
[AllowAnonymous]
[Route("api/password/{email}")]
public async Task<IActionResult> SendPasswordEmailResetRequestAsync(string email, [FromBody] PasswordReset passwordReset)
{
    // some irrelevant validations here
    await _myIdentityWrapperService.ResetPasswordAsync(email, passwordReset.Password, passwordReset.Code);
    return Ok();
}
The problem is that Identity responds with an
Invalid token
error. I have found that the codes don't match; the code above is received back in the JSON object of the PUT request as follows:
CfDJ8JBnWaVj6h1PtqlmlJaH57r9TRA5j7Ij1BVyeBUpqX 5Cq1msu9zgkuI32Iz9x/5uE1B9fKFp4tZFFy6lBTseDFTHSJxwtGu jHX5cajptUBiVqIChiwoTODh7ei4 MOkX7rdNVBMhG4jOZWqqtZ5J30gXr/JmltbYxqOp4JLs8V05BeKDbbVO/Fsq5 jebokKkR5HEJU mQ5MLvNURsJKRBbI3qIllj1RByXt9mufGRE3wmQf2fgKBkAL6VsNgB8w==
Notice that where there were + symbols there are now spaces, and obviously that causes Identity to think the tokens are different.
For some reason Angular is decoding the URL query parameter differently from how it was encoded.
How can I resolve this?
This answer https://stackoverflow.com/a/31297879/2948212 pointed me in the right direction, but as I said, it was for a different version, and the solution is now slightly different.
The approach is still the same: Base64Url-encode the token, and then Base64Url-decode it. That way both Angular and ASP.NET Core will see the very same code.
I needed to add another dependency, Microsoft.AspNetCore.WebUtilities.
Now the code would be something like this:
public async Task SendPasswordResetEmailAsync(string email)
{
    // _userManager is an instance of UserManager<User>
    var userEntity = await _userManager.FindByNameAsync(email);
    var tokenGenerated = await _userManager.GeneratePasswordResetTokenAsync(userEntity);
    byte[] tokenGeneratedBytes = Encoding.UTF8.GetBytes(tokenGenerated);
    var codeEncoded = WebEncoders.Base64UrlEncode(tokenGeneratedBytes);
    var link = Url.Action("MyAction", "MyController", new { email = email, code = codeEncoded }, protocol: HttpContext.Request.Scheme);
    // this is my service that sends an email to the user containing the generated password reset link
    await _emailService.SendPasswordResetEmailAsync(userEntity, link);
}
And when receiving the code back during the PUT request:
[HttpPut]
[AllowAnonymous]
[Route("api/password/{email}")]
public async Task<IActionResult> SendPasswordEmailResetRequestAsync(string email, [FromBody] PasswordReset passwordReset)
{
    // some irrelevant validations here
    await _myIdentityWrapperService.ResetPasswordAsync(email, passwordReset.Password, passwordReset.Code);
    return Ok();
}

// in MyIdentityWrapperService
public async Task ResetPasswordAsync(string email, string password, string code)
{
    var userEntity = await _userManager.FindByNameAsync(email);
    var codeDecodedBytes = WebEncoders.Base64UrlDecode(code);
    var codeDecoded = Encoding.UTF8.GetString(codeDecodedBytes);
    await _userManager.ResetPasswordAsync(userEntity, codeDecoded, password);
}
I had a similar issue: I was encoding my token but it kept failing validation, and the problem turned out to be this: options.LowercaseQueryStrings = true;
Do not set options.LowercaseQueryStrings to true; it alters the validation token's integrity and you will get an Invalid Token error.
// This allows routes to be in lowercase
services.AddRouting(options =>
{
    options.LowercaseUrls = true;
    options.LowercaseQueryStrings = false;
});
I have tried the answers above, but this guide helped me. Basically, you need to encode the code, otherwise you will encounter some weird bugs. To summarise, you need to do this:
string code = HttpUtility.UrlEncode(UserManager.GenerateEmailConfirmationToken(userID));
After this, if it is applicable to you, decode the code:
string decoded = HttpUtility.UrlDecode(code);
After scaffolding the ConfirmEmail page in my ASP.NET Core 3.0 project, I ran into the same problem.
Removing the following line from the OnGetAsync method in ConfirmEmail.cshtml.cs fixed the problem:
code = Encoding.UTF8.GetString(WebEncoders.Base64UrlDecode(code));
In the scaffolded Login page, the code is added to the callbackUrl, which is then encoded using HtmlEncoder.Default.Encode(callbackUrl). When the link is clicked, the decoding happens automatically and the code arrives as it should to confirm the email.
UPDATE:
I noticed that during the Forgot Password process the code is Base64 encoded before being put in the callbackUrl which then means that the Base64 decode IS necessary.
A better solution would thus be to add the following line wherever the code is generated, before adding it to the callbackUrl:
code = WebEncoders.Base64UrlEncode(Encoding.UTF8.GetBytes(code));
Here is a link to the issue which has been fixed.
(According to this post: https://stackoverflow.com/a/27943434/9869427)
For the ResetPasswordAsync (Identity manager) "invalid token" problem, which happens because "+" becomes a space in the URL, use Uri.EscapeDataString.
Example: in my sendResetPasswordByMailAsync:
var token = "Aa+Bb Cc";
var encodedToken = Uri.EscapeDataString(token);
// encodedToken == "Aa%2BBb%20Cc"
var url = $"http://localhost:4200/account/reset-password?email={email}&token={encodedToken}";
var mailContent = $"Please reset your password by <a href='{url}'>clicking here</a>.";
Now you can click on your link and you will get to the correct URL, with "+" encoded as %2B, so your token won't be invalid.
I had the same issue while hosting my website using Cloud Run in GCP, and none of the solutions here worked for me.
In my case the problem was with the data protection keys being stored locally in the instance. The following entry in the logs hinted at the problem:
Storing keys in a directory '/home/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
Because of this, the tokens were only valid to the instance that issued them, which was already destroyed by the time the users clicked the link sent to their email.
The solution was to use distributed storage for the keys, for instance a database via Entity Framework:
using Microsoft.AspNetCore.DataProtection;
...
public void ConfigureServices(IServiceCollection services)
{
    // The context passed to PersistKeysToDbContext must implement IDataProtectionKeyContext
    // so the keys have a table to be stored in.
    services.AddDataProtection()
        .PersistKeysToDbContext<DbContext>();
    services.AddIdentity<User, IdentityRole>()
        .AddEntityFrameworkStores<AnotherDbContext>()
        .AddDefaultTokenProviders();
}
The details of the different types of storages can be found here.
Had a similar issue on ASP.NET Core 2.1 and was scratching my head, because no encoding/decoding of the code (token) worked for me, and I always got an Invalid token error from userManager.ConfirmEmailAsync(user, code).
SOLUTION:
It turned out that the user had been created not with UserManager but directly through the DbContext (_dbContext.Users.AddAsync). After replacing that creation method with _userManager.CreateAsync, everything worked fine for me, even without any encoding/decoding of the code (token).
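For reference, a minimal sketch of the fix (the User type, the _userManager field, the email variable and the password literal are placeholders for whatever your project uses); CreateAsync lets Identity fill in the password hash, normalized fields and security stamp that token validation relies on, which a raw DbContext insert skips:

// Sketch only: create users through UserManager rather than through the DbContext directly.
// "User", "_userManager", "email" and the password literal are placeholders.
var user = new User { UserName = email, Email = email };
var result = await _userManager.CreateAsync(user, "S0me-Str0ng-P@ssw0rd!");
if (!result.Succeeded)
{
    // Inspect result.Errors to see why creation failed.
}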
In my case, this problem was due to the OnPostAsync method in RegisterModel encoding the callback URL:
var callbackUrl = Url.Page(
    "/Account/ConfirmEmail",
    pageHandler: null,
    values: new { userId = user.Id, code = code },
    protocol: Request.Scheme);

await _emailSender.SendEmailAsync(Input.Email, "Confirm your email",
    $"Please confirm your account by <a href='{HtmlEncoder.Default.Encode(callbackUrl)}'>clicking here</a>.");
This encoding (the application of HtmlEncoder.Default.Encode() to callbackUrl) made the '&' characters in the URL become '&amp;', thus invalidating the whole link.
You can also simply replace the spaces back with '+' when verifying the token in the reset PUT:
var decode = token.Replace(" ", "+");
await _userManager.ResetPasswordAsync(user, decode, Password);
I used the Google Cloud Storage JavaScript client library to upload a file to Google Cloud Storage. Now I want to get a public link that I can share with my friends without them needing a Google account. I tried to reuse the JavaScript example for insertObject, with the following code:
var request = gapi.client.request({
  'path': '/upload/storage/' + API_VERSION + '/b/' + BUCKET + '/o',
  'method': 'POST',
  'params': {'uploadType': 'multipart'},
  'x-goog-acl','public-read',
  'headers': {
    'Content-Type': 'multipart/mixed; boundary="' + boundary + '"'
  },
The upload succeeds into my Google Cloud Storage bucket (myphoto_upload), but I cannot access the file via https://storage.cloud.google.com/myphoto_upload/brv_brown.png. I tried to replace 'x-goog-acl','public-read', with 'acl': [{'entity': 'allUsers', 'role': 'READER'}],
or with 'body': {'entity': 'allAuthenticatedUsers', 'role': 'READER'}, but the result is the same. Thanks for your help in advance.
First, you have a typo. It should be : instead of , after x-goog-acl.
Second, x-goog-acl is a header, so it should be included in the headers object.
I think your question involves you uploading a file to GCS and then allowing others to download a file. If you're asking about others anonymously uploading files to your bucket, that's a different matter. Let me know if I've misunderstood.
If you are programmatically generating public links to objects, the easiest way is to just use one of these two URL patterns:
https://storage.googleapis.com/myphoto_upload/brv_brown.png
https://myphoto_upload.storage.googleapis.com/brv_brown.png
Or, as code:
"https://storage.googleapis.com/" + bucket_name + "/" + object_name
As long as the ACL contains allUsers:READER, those URLs will work fine anonymously.
For this library: https://github.com/GoogleCloudPlatform/google-cloud-php#google-cloud-storage-ga, use it as follows:
use Google\Cloud\Storage\StorageClient;

// this can be set via other ENV mechanisms on the server side
putenv('GOOGLE_APPLICATION_CREDENTIALS=' . dirname(__FILE__) . '/gauth.json');

$storage = new StorageClient([
    'projectId' => '123456789' // use your own project ID
]);

// get a reference to your bucket (use your own bucket name)
$bucket = $storage->bucket('YOUR_BUCKET_NAME');

/*
 * For public access:
 * https://storage.googleapis.com/[BUCKET_NAME]/[FILE_NAME].png
 */
$bucket->upload(
    fopen('data3.txt', 'r'), ['predefinedAcl' => 'publicRead']
);
In my application I am doing client-side and server-side validation. Client-side validation is pretty easy, and the user will not be able to click the submitForm button until it passes. But suppose the zip code entered does not match the order number, and the server returns an error inside a 200 response to the Angular $http call promise. How can I show the server-side error and maintain my validation?
{"result":{"success":false,"code":null,"message":"The entered Zip does not match with Order"},"errors":
[]
This might not be the exact answer, but it could help:
1) On the server side, if there are validation errors, first stop the request from reaching the database.
2) Populate the error messages on the server side as an associative array, with the field name as key and the error message as value, then send the array to the client in JSON format (see the sketch below).
3) On the client side, loop through the JSON object to initialise the scope variables that will contain the error messages per field, for example in the format serverErrors.fieldName.error = 'Error message from server'.
This should be a scope object, and the same object should be rendered in the template per field, so that when you loop through the JSON and assign error messages to each field, they show up in the template thanks to Angular's two-way binding.
This way you will be able to handle custom server-side validations and show the errors on the client side. This is possible not just in theory; we have implemented this approach in one of our projects.
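As a rough illustration of step 2, here is a hypothetical ASP.NET Web API controller (the question does not name the server stack, so every type, route and rule below is made up for the sketch) that stops before the database and returns a field-name-to-message map alongside a result object like the one shown above:

using System.Collections.Generic;
using System.Web.Http;

public class OrderValidationController : ApiController
{
    // Hypothetical endpoint: validates the submitted form and returns per-field error messages.
    [HttpPost]
    public IHttpActionResult Validate(OrderForm form)
    {
        var errors = new Dictionary<string, string>(); // field name -> error message

        if (!ZipMatchesOrder(form.Zip, form.OrderNumber)) // placeholder business rule
        {
            errors["zip"] = "The entered Zip does not match with Order";
        }

        if (errors.Count > 0)
        {
            // Stop here, before anything reaches the database.
            return Ok(new { result = new { success = false }, errors });
        }

        // ... persist the order here ...
        return Ok(new { result = new { success = true }, errors });
    }

    private static bool ZipMatchesOrder(string zip, string orderNumber) => true; // stub for the real check

    public class OrderForm
    {
        public string Zip { get; set; }
        public string OrderNumber { get; set; }
    }
}

On the Angular side, the returned errors object can be assigned directly to a scope variable (e.g. $scope.serverErrors) and bound per field in the template.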
Hope that helps :)
I think you can try:
$http.post('URL', { "ID": ID }).success(function (resultData) {
    if (resultData.result.success) {
        alert("success!");
        $window.location.href = "/";
    }
    else {
        alert(resultData.result.message);
    }
});