Blank images when logging in to one account through vscode ssh-remote

If I log in to one user account through vscode ssh-remote and open any image, it loads but renders blank.
However, if I log in to another account and open any image at the same time, both images load and display correctly.
No special extensions, no special ssh configuration.
I hope to view images through ssh normally.

Related

Cocoa sandbox - reopen files when restarting app

I have a sandboxed Cocoa app. It has a one-window, multi-tabbed UI. I don't use NSDocument, but it is an app that can edit multiple "documents". When the app closes, I save a list of the open documents. When I restart the app, I try to re-open them in tabs.
This works fine when all the document files are in the sandboxed Documents directory. However, users can also open files outside the sandbox. When the app is restarted, these files cannot be opened because of sandbox permissions; the file is not readable. I understand that normally the user has to choose the out-of-sandbox file from an open dialog.
The files are in the "Recent Files" list and can be opened that way.
There must be a way to do this, since that is how most text editors work. Is there a magic entitlement or call I'm missing?
You need security-scoped bookmarks.
Check out the Security-Scoped Bookmarks and Persistent Resource Access section of Apple's App Sandbox Design Guide.
Your app’s access to file-system locations outside of its container—as granted to your app by way of user intent, such as through Powerbox—does not automatically persist across app launches or system restarts. When your app reopens, you have to start over.
...
Starting in OS X v10.7.3, you can retain access to file-system resources by employing a security mechanism, known as security-scoped bookmarks, that preserves user intent.
I'd summarize it here, but the above link has everything you need.
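For a rough idea of the shape of it, here is a minimal Swift sketch of the save/resolve cycle. The function names and storage key are my own, and note that app-scoped security bookmarks also need the com.apple.security.files.bookmarks.app-scope entitlement alongside the user-selected-file entitlement.

import Foundation

// Call right after the user picks an out-of-sandbox file in an
// NSOpenPanel (Powerbox grants access at that moment).
func saveBookmark(for url: URL) throws {
    let data = try url.bookmarkData(
        options: .withSecurityScope,
        includingResourceValuesForKeys: nil,
        relativeTo: nil)
    // The storage key is arbitrary; the path is used just for illustration.
    UserDefaults.standard.set(data, forKey: url.path)
}

// Call at relaunch for each remembered document.
func restoreAccess(forKey key: String) throws -> URL? {
    guard let data = UserDefaults.standard.data(forKey: key) else { return nil }
    var isStale = false
    let url = try URL(
        resolvingBookmarkData: data,
        options: .withSecurityScope,
        relativeTo: nil,
        bookmarkDataIsStale: &isStale)
    if isStale {
        try saveBookmark(for: url)  // bookmark aged out; re-create it
    }
    // Balance with url.stopAccessingSecurityScopedResource() when the
    // document is closed.
    return url.startAccessingSecurityScopedResource() ? url : nil
}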

Using WebDav & RoboCopy to copy docs between Document Libraries

I am trying to automate the process of copying documents between SharePoint 2013 document libraries using a RoboCopy job over WebDAV.
The batch job only works if a manual connection was first made on the intranet, and only for as long as the user's session is active.
This is the command I am running:
robocopy "Source" "Destination" "Log File" /MOV
And these are the error messages I'm receiving:
Exception Message: Access Denied. Before opening files in this location, you must first add the web site to your trusted sites list, browse to the web site, and select the option to login automatically.
StackTrace: at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.Directory.InternalCreateDirectory(String fullPath, String path, Object dirSecurityObj, Boolean checkHost)
at System.IO.Directory.InternalCreateDirectoryHelper(String path, Boolean checkHost)
at System.IO.Directory.CreateDirectory(String path)
Any ideas how this can be done?
I had similar issues with mapping network drives over the internet using WebDAV in both Explorer and Robocopy: it would work after a user session was started and the username/password entered, but it wouldn't save the passwords even when asked to, so it would stop working at the next logon. The root cause was two-fold:
1) By design on Windows, you can't save WebDAV credentials for non-local sites (see this MS KB)
2) You need to add the URL to the Local intranet sites list.
As an attempted fix, try this:
1) Create a Multi-String Value at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters called AuthForwardServerList
2) Add the URLs you are trying to connect to. With SharePoint, that may be a bit trickier for you to figure out than in my case, but as an example, here are the strings I needed to enter:
https://*.[Website].com
http://*.[website].com
*.[Website].com
Optional: while you are in the WebClient Parameters key, you may want to adjust FileSizeLimitInBytes (to allow larger downloads before timeout) and FileAttributesLimitInBytes (to support large numbers of files in a folder; see KB912152) - I set mine to:
"FileSizeLimitInBytes"=dword:80000000 (Hex)
"FileAttributesLimitInBytes"=dword:00989680 (Hex)
3) Open Internet Explorer, and click Tools, Internet Options
4) Click the Security tab, click Local intranet, then click Sites
5) Click Advanced in the Local intranet settings
6) Enter the full https URL of the site, click Add, then click Close
7) Click OK, click OK, then close Internet Explorer
8) Restart the WebClient service
9) Open Windows Explorer, and click Map Network Drive
10) Pick a drive letter, enter the URL as the folder, tick Reconnect at sign-in, and tick Connect using different credentials. Click OK.
11) Enter the username and password, tick the Remember my credentials box, and click OK
Hope some of that helps.

In Silverlight 5, how do I get the directory name of a file without running the application out of browser?

In Silverlight 5, I am performing file operations using OpenFileDialog. I want to read the directory name of the selected file from the OpenFileDialog box, but I get the error "File operation not permitted. Access to path '' is denied." How can I solve this issue?
It works fine when I opt for running the application out of browser with elevated trust, but I don't want to run my app outside the browser. So my problem is to get the directory name of the selected file without making the application run out of browser. Please help!
You could try to run Silverlight 5 in the browser with elevated trust.
There is no guarantee that it will work, because the Silverlight app will still be subject to the restrictions imposed by the browser's security settings.

only logged-in users can play audio from our server

We have made a Silverlight application where users can preview audio files in their browser via the Telerik RadMediaPlayer control.
The files are on a web server, and anyone who sniffs the traffic can download them.
We would like to prevent non-logged-in users from accessing/downloading these files.
Besides providing the application with some sort of temporary valid URL and implementing a custom HttpHandler... what are our options?
It's not too big of a problem if our customers can download the files; we just don't want the rest of the world to also have access.
Any ideas would be more than welcome!
[Update]
The only thing I can come up with (sketched below) is:
host the files in a non-public folder
if a user requests to preview a file, copy it to a public folder under a new name ([guid].mp3) and return its URL
every x minutes, clean the public folder.
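A rough Swift sketch of that copy-and-expire scheme, with hypothetical directory paths and a five-minute lifetime:

import Foundation

// Hypothetical locations; adjust to the server's layout.
let privateDir = URL(fileURLWithPath: "/srv/audio-private")
let publicDir = URL(fileURLWithPath: "/srv/www/public-audio")

// Copy a protected file into the public folder under a random name
// and return that name so the app can build the temporary URL.
func publishTemporaryCopy(of fileName: String) throws -> String {
    let source = privateDir.appendingPathComponent(fileName)
    let publicName = UUID().uuidString + ".mp3"
    try FileManager.default.copyItem(
        at: source,
        to: publicDir.appendingPathComponent(publicName))
    return publicName
}

// Delete public copies older than maxAge seconds; run every few minutes.
func cleanPublicFolder(maxAge: TimeInterval = 300) throws {
    let files = try FileManager.default.contentsOfDirectory(
        at: publicDir,
        includingPropertiesForKeys: [.contentModificationDateKey])
    for file in files {
        let values = try file.resourceValues(forKeys: [.contentModificationDateKey])
        if let modified = values.contentModificationDate,
           Date().timeIntervalSince(modified) > maxAge {
            try FileManager.default.removeItem(at: file)
        }
    }
}

Note the window this leaves open: anyone who captures a [guid].mp3 URL can still fetch it until the cleaner runs, which is why the answer below checks on every request instead.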
Don't let the web server serve up the files straight out of a directory. Put part of your application in front, and let one of your server-side scripts serve up these files. Keep the raw audio files out of the web root.
For instance, your client-side application would access files like so:
http://someserver/yourscript?audio_asset_id=12345
The code at yourscript would verify the session data, ensuring that a user is logged in, then figure out the real path to asset ID 12345 and echo its contents to the client. Don't forget to include the proper Content-Type header as well.
Once the accessing of these assets is under your control, you can implement whatever security measures you like. If your sessions are already pretty well safeguarded, this should be fine. I would also recommend implementing sane quotas. If you get 100 requests for an asset using the same session ID from multiple IP addresses... something isn't right.
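The answer assumes whatever server-side stack you already run (this is a Silverlight-era question, so likely ASP.NET). Purely to illustrate the gating pattern, here is a minimal sketch in server-side Swift with Vapor 4; the route, file path, and "userID" session key are all hypothetical.

import Vapor

public func configure(_ app: Application) throws {
    // Session support so we can tell logged-in users apart.
    app.middleware.use(app.sessions.middleware)

    // GET /audio?audio_asset_id=12345
    app.get("audio") { req -> Response in
        // Reject anyone without an authenticated session.
        guard req.session.data["userID"] != nil else {
            throw Abort(.unauthorized)
        }
        // Accept only numeric IDs, which also blocks path traversal.
        guard let assetID = req.query[String.self, at: "audio_asset_id"],
              assetID.allSatisfy(\.isNumber) else {
            throw Abort(.badRequest)
        }
        // Map the opaque ID to a file kept outside the web root.
        // streamFile sets Content-Type and honors range requests.
        let path = "/srv/audio-private/\(assetID).mp3"
        return req.fileio.streamFile(at: path)
    }
}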

What setting in IIS could cause the XAP to download every time the page is accessed from a fresh IE?

I have a sample program which does nothing but Hello World. I open IE, go to my development environment, and access the Silverlight page: it loads the XAP the first time, and if I close IE and open it again, the XAP does not get downloaded. Since there are no changes, I expect it not to download.
After deploying it to the QA environment, I open IE for the first time and it loads the XAP as expected. Now I close IE and open it again; I expect it not to download the XAP, but it does. However, if I refresh the page, it does not download the XAP. So this happens only on a fresh IE open in our QA environment.
All the above tests are done on the same box with the same IE settings, so there is no client-side IE cache issue. I checked the date and time on the servers to see if there is any difference, as described in the "Silverlight XAP gets downloaded every time" question, and our servers have the same date and time.
Does anyone know how to prevent the XAP from downloading every time?
The default settings in IE mean that a fresh instance of IE will always attempt to fetch each unique URL when it is first encountered. IE does this even if the cache headers sent with the resource the last time it was fetched would indicate the resource is still fresh.
However IE will send If-Modified-Since and/or If-None-Match when it re-requests a resource that it has a copy of in its cache. Hence the server has the option of responding with 304 Not Modified, are you sure that is not happening? The 304 has no entity body and is therefore a cheap response.
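One way to find out is to issue the conditional request yourself and look at the status code. A rough Swift sketch (the URL is a placeholder; the requests bypass URLSession's own cache so that a raw 304, if the server sends one, reaches the caller):

import Foundation

// Fetch the XAP twice, the second time with If-Modified-Since,
// and see whether the server revalidates with a 304.
let url = URL(string: "http://qa-server/ClientBin/HelloWorld.xap")!

var first = URLRequest(url: url)
first.cachePolicy = .reloadIgnoringLocalCacheData
let (_, firstResponse) = try await URLSession.shared.data(for: first)
guard let lastModified = (firstResponse as? HTTPURLResponse)?
    .value(forHTTPHeaderField: "Last-Modified") else {
    fatalError("No Last-Modified header; the server sent no cache metadata")
}

var second = URLRequest(url: url)
second.cachePolicy = .reloadIgnoringLocalCacheData
second.setValue(lastModified, forHTTPHeaderField: "If-Modified-Since")
let (body, secondResponse) = try await URLSession.shared.data(for: second)
let status = (secondResponse as! HTTPURLResponse).statusCode

// 304 with an empty body means the re-request is a cheap header
// exchange; 200 with the full body means the whole XAP is resent.
print("status:", status, "bytes:", body.count)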
Note also that IE can make some strange heuristic choices if the server fails to send any cache-control headers with a resource. One of these is that no caching is performed when the resource is quite large.
If you haven't already done so, I would recommend you set a reasonable expiration on the ClientBin folder in IIS Manager (in IIS7, select the ClientBin folder, open "HTTP Response Headers", click "Set Common Headers...", and enable "Expire Web content").
