I've just been working with the Google Drive API. I have one problem: it's too slow. I use the methods as shown in the documentation. For example:
List<File> getFilesByParentId(String id, Drive service) throws IOException {
    Children.List request = service.children().list(id);
    ChildList children = request.execute();
    List<ChildReference> childList = children.getItems();
    File file;
    List<File> files = new ArrayList<File>();
    for (ChildReference child : childList) {
        file = getFileById(child.getId(), service);
        if (file == null) {
            continue;
        } else if (file.getMimeType().equals(FOLDER_IDENTIFIER)) {
            System.out.println(file.getTitle() + " AND " + file.getMimeType());
            files.add(file);
        }
    }
    return files;
}
private File getFileById(String fileId, Drive service) throws IOException {
    File file = service.files().get(fileId).execute();
    if (file.getExplicitlyTrashed() == null) {
        return file;
    }
    return null;
}
QUESTION: the method works, but it is too slow, taking about 30 seconds.
How can I optimize it? For example, by not fetching everything (only folders, or only files), or something like that.
You can use the q parameter with something like:
service.files().list().setQ("mimeType != 'application/vnd.google-apps.folder' and 'Id' in parents and trashed=false").execute();
This will get you all the files that are not folders, are not trashed, and whose parent has the id Id. All in one request.
And by the way, the API is not slow; your algorithm, which makes too many requests, is.
public List<File> getAllFiles(String id, Drive service) throws IOException {
    String query = "'" + id + "' in parents and trashed=false and mimeType!='application/vnd.google-apps.folder'";
    List<File> result = new ArrayList<File>();
    Files.List request = service.files().list().setQ(query);
    do {
        FileList files = request.execute();
        result.addAll(files.getItems());
        request.setPageToken(files.getNextPageToken());
    } while (request.getPageToken() != null && request.getPageToken().length() > 0);
    return result;
}
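Since the original goal was to list only folders, the same single-request approach works with the mimeType test flipped to an equality. A minimal sketch under the same Drive v2 API, assuming the FOLDER_IDENTIFIER constant from the question equals 'application/vnd.google-apps.folder':
public List<File> getFoldersByParentId(String id, Drive service) throws IOException {
    // Fetch only the non-trashed subfolders of a parent in one paged query.
    String query = "'" + id + "' in parents and trashed=false"
            + " and mimeType='application/vnd.google-apps.folder'";
    List<File> result = new ArrayList<File>();
    Files.List request = service.files().list().setQ(query);
    do {
        FileList folders = request.execute();
        result.addAll(folders.getItems());
        request.setPageToken(folders.getNextPageToken());
    } while (request.getPageToken() != null && request.getPageToken().length() > 0);
    return result;
}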
I'm trying to resolve a problem with the search bar. It works, but if I press two keys almost at the same time, the app only searches for the text as it was after the first key press.
Here are the logs:
In this one, it works because I press P, then R:
[EDT] 0:4:9,283 - p
[EDT] 0:4:9,348 - 10
[EDT] 0:4:9,660 - pr
[EDT] 0:4:9,722 - 3
The second one doesn't work because I pressed P and R nearly at the same time:
[EDT] 0:4:35,237 - p
[EDT] 0:4:35,269 - pr
[EDT] 0:4:35,347 - 0
[EDT] 0:4:35,347 - 10
The logs show the searched String and the result size. As you can see, in the first case the results arrive before the next character is typed, while in the second case all results arrive after both characters have been typed.
The main problem is that in the second case, the results for the 'p' String are shown instead of those for 'pr'.
I'm using the search bar from the Toolbar API with addSearchCommand, and an InfiniteContainer to show the result data.
Could it be a problem with the order in which the events from addSearchCommand are handled?
EDIT: Here is the client-side code. Server side, it's just a simple REST service call which fetches the data from the database.
public static ArrayList<Patient> getSearchedPatient(int index, int amount, String word)
{
    ArrayList<Patient> listPatient = null;
    Response response;
    try {
        response = RestManager.executeRequest(
                Rest.get(server + "/patients/search")
                        .queryParam("index", String.valueOf(index))
                        .queryParam("amount", String.valueOf(amount))
                        .queryParam("word", word),
                RequestResult.ENTITIES_LIST,
                Patient.class);
        listPatient = (ArrayList<Patient>) response.getResponseData();
        Log.p("" + listPatient.size());
    } catch (RestManagerException e) {
        LogError("", e);
    }
    return listPatient;
}
private static Response executeRequest(RequestBuilder req, RequestResult type, Class objectClass) throws RestManagerException
{
    Response response = null;
    try {
        switch (type) {
            case BYTES:
                response = req.getAsBytes();
                break;
            case JSON_MAP:
                response = req.acceptJson().getAsJsonMap();
                break;
            case ENTITY:
                response = req.acceptJson().getAsProperties(objectClass);
                break;
            case ENTITIES_LIST:
                response = req.acceptJson().getAsPropertyList(objectClass);
                break;
            default: // falls through to STRING
            case STRING:
                response = req.getAsString();
                break;
        }
    } catch (Exception e) {
        log().error("Error while executing the request", e);
        response = null;
    }
    return response;
}
So the trick here is a simple one: don't make a request on every keystroke. Most users type fast enough to saturate your network connection, so you will see completion suggestions referring to text that is no longer relevant.
This is a non-trivial implementation which I discuss in depth in the Uber book, where such a feature is implemented.
The solution is to send a request only after a short delay, while caching responses to avoid duplicate requests, and ideally canceling requests in progress when applicable. The solution in the Uber book does all three; I'll cover just the basics in this mockup code. First you need a field for the timer and the current request. Ideally you would also have a Map containing cached data:
private UITimer delayedRequest;
private String currentSearch;
private Map<String, String> searchCache = new HashMap<>();
Then you need to bind a listener like this:
tb.addSearchCommand(e -> {
    String s = (String) e.getSource();
    if (s == null) {
        if (delayedRequest != null) {
            delayedRequest.cancel();
            delayedRequest = null;
        }
        return;
    }
    if (currentSearch != null && s.equals(currentSearch)) {
        return;
    }
    if (delayedRequest != null) {
        delayedRequest.cancel();
        delayedRequest = null;
    }
    currentSearch = s;
    delayedRequest = UITimer.timer(100, false, () -> {
        doSearchCode();
    });
});
I didn't include the usage of the cache here; you need to check it within the search method and fill it in the result handling code. I also didn't implement canceling requests already in progress.
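For completeness, here is a minimal sketch of what the cache check could look like; doSearchCode() and applyResults(...) are hypothetical names, and the cache simply maps the search string to previously received result data:
private void doSearchCode() {
    String query = currentSearch;
    String cached = searchCache.get(query);
    if (cached != null) {
        applyResults(query, cached); // hypothetical method that updates the InfiniteContainer
        return;
    }
    // ...perform the network request asynchronously; in its completion callback:
    // searchCache.put(query, resultData);
    // and only show the results if the user hasn't typed something newer meanwhile:
    // if (query.equals(currentSearch)) { applyResults(query, resultData); }
}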
When trying to run the code below I'm getting javax.naming.OperationNotSupportedException with the message:
[LDAP: error code 12 - 00000057: LdapErr: DSID-0C09079A, comment: Error processing control, data 0, v2580].
The first page is successfully retrieved, and the exception is thrown only at the second loop iteration.
public void pagedResults() {
    PagedResultsCookie cookie = null;
    SearchControls searchControls = new SearchControls();
    searchControls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    int page = 1;
    do {
        logger.info("Starting Page: " + page);
        PagedResultsDirContextProcessor processor = new PagedResultsDirContextProcessor(20, cookie);
        List<String> lastNames = ldapTemplate.search("", initialFilter.encode(), searchControls, UserMapper.USER_MAPPER_VNT, processor);
        for (String l : lastNames) {
            logger.info(l);
        }
        cookie = processor.getCookie();
        page = page + 1;
    } while (null != cookie.getCookie());
}
However, when I remove Spring LDAP and use the plain JNDI implementation below, it works!
try {
    LdapContext ctx = new InitialLdapContext(env, null);
    // Activate paged results
    int pageSize = 5;
    byte[] cookie = null;
    ctx.setRequestControls(new Control[] { new PagedResultsControl(pageSize, Control.CRITICAL) });
    int total;
    do {
        /* perform the search */
        NamingEnumeration results = ctx.search("",
                "(&(objectCategory=person)(objectClass=user)(SAMAccountName=vnt*))",
                searchCtls);
        /* for each entry print out name + all attrs and values */
        while (results != null && results.hasMore()) {
            SearchResult entry = (SearchResult) results.next();
            System.out.println(entry.getName());
        }
        // Examine the paged results control response
        Control[] controls = ctx.getResponseControls();
        if (controls != null) {
            for (int i = 0; i < controls.length; i++) {
                if (controls[i] instanceof PagedResultsResponseControl) {
                    PagedResultsResponseControl prrc = (PagedResultsResponseControl) controls[i];
                    total = prrc.getResultSize();
                    if (total != 0) {
                        System.out.println("***************** END-OF-PAGE "
                                + "(total : " + total
                                + ") *****************\n");
                    } else {
                        System.out.println("***************** END-OF-PAGE "
                                + "(total: unknown) ***************\n");
                    }
                    cookie = prrc.getCookie();
                }
            }
        } else {
            System.out.println("No controls were sent from the server");
        }
        // Re-activate paged results
        ctx.setRequestControls(new Control[] { new PagedResultsControl(pageSize, cookie, Control.CRITICAL) });
    } while (cookie != null);
    ctx.close();
} catch (NamingException e) {
    System.err.println("PagedSearch failed.");
    e.printStackTrace();
} catch (IOException ie) {
    System.err.println("PagedSearch failed.");
    ie.printStackTrace();
}
Any hints?
The bad thing about LDAP paged results is that they only work if the same underlying connection is used for all requests. The internals of Spring LDAP get a new connection for each LdapTemplate operation, unless you use the transactional support.
The easiest way to make sure the same connection is used for a sequence of LdapTemplate operations is to use the transaction support, i.e. configure transactions for Spring LDAP and wrap the target method with a @Transactional annotation.
I managed to make my example above work using the SingleContextSource.doWithSingleContext approach.
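For reference, a minimal sketch of that approach (Spring LDAP 2.x; exact packages and overloads should be checked against your version), reusing the filter, mapper and processor from my example; contextSource is the ContextSource behind ldapTemplate:
import java.util.ArrayList;
import java.util.List;
import javax.naming.directory.SearchControls;
import org.springframework.ldap.control.PagedResultsCookie;
import org.springframework.ldap.control.PagedResultsDirContextProcessor;
import org.springframework.ldap.core.LdapOperations;
import org.springframework.ldap.core.support.LdapOperationsCallback;
import org.springframework.ldap.core.support.SingleContextSource;

List<String> lastNames = SingleContextSource.doWithSingleContext(
        contextSource, new LdapOperationsCallback<List<String>>() {
            @Override
            public List<String> doWithLdapOperations(LdapOperations operations) {
                List<String> result = new ArrayList<String>();
                PagedResultsCookie cookie = null;
                do {
                    // Every page now runs over the same underlying connection.
                    PagedResultsDirContextProcessor processor =
                            new PagedResultsDirContextProcessor(20, cookie);
                    SearchControls searchControls = new SearchControls();
                    searchControls.setSearchScope(SearchControls.SUBTREE_SCOPE);
                    result.addAll(operations.search("", initialFilter.encode(),
                            searchControls, UserMapper.USER_MAPPER_VNT, processor));
                    cookie = processor.getCookie();
                } while (cookie.getCookie() != null);
                return result;
            }
        });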
However, my scenario is different: my app is service oriented, and the paged results as well as the cookie should be sent to an external client so that it can decide whether or not to request the next pages.
So as far as I can tell, spring-ldap does not support such a case. I must use the pure implementation so that I can keep track of the underlying connection across requests. Transaction support could help, as could SingleContextSource, but not across different requests.
@marthursson: is there any plan for Spring LDAP to support this in the future?
I found I could use your first example (Spring) as long as I set the ignorePartialResultException property to true in my ldapTemplate configuration and put @Transactional on my method as suggested.
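In code form, that combination could look roughly like this (a sketch; the class and wiring are assumptions built around the earlier example):
@Service
public class UserSearchService {

    private final LdapTemplate ldapTemplate;

    public UserSearchService(ContextSource contextSource) {
        this.ldapTemplate = new LdapTemplate(contextSource);
        // Active Directory referrals surface as PartialResultException;
        // ignoring them lets the paged loop run to completion.
        this.ldapTemplate.setIgnorePartialResultException(true);
    }

    // With Spring LDAP transaction support configured, @Transactional keeps
    // the whole sequence of LdapTemplate calls on one connection.
    @Transactional
    public void pagedResults() {
        // ...same paged search loop as in the question...
    }
}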
You can replace the ldapTemplate DirContext like this:
ldapTemplate.setContextSource(new SingleContextSource(ldapContextSource().getReadWriteContext()));
Has anyone EVER managed to use a Windows 8 app to copy files from a UNC directory to a local directory?
According to the official documentation here, it is possible to connect to a UNC path.
I am using the standard File Access sample and have changed one line of code to read as below.
I have added all the capabilities.
Added .txt as a file type.
The UNC path is read/write to everyone and is located on the same machine.
But I keep getting Access Denied errors.
Can anyone possibly provide me with a working example?
This is driving me mad and really makes me question the whole point of Win 8 dev for LOB apps.
TIA
private async void Initialize()
{
    try
    {
        //sampleFile = await Windows.Storage.KnownFolders.DocumentsLibrary.GetFileAsync(filename);
        string myfile = @"\\ALL387\Temp\testfile.txt";
        sampleFile = await Windows.Storage.StorageFile.GetFileFromPathAsync(myfile);
    }
    catch (FileNotFoundException)
    {
        // sample file doesn't exist so scenario one must be run
    }
    catch (Exception e)
    {
        var fred = e.Message;
    }
}
I have sorted this out, and the best way I found was to create a folder object, enumerate the files in the folder object, and copy the files one at a time to the local folder, then access them there.
It seems that you can't open the files directly, but you can copy them (which was what I was trying to achieve in the first place).
Hope this helps.
private async void Initialize()
{
    try
    {
        var myfldr = await Windows.Storage.StorageFolder.GetFolderFromPathAsync(@"\\ALL387\Temp");
        var myfiles = await myfldr.GetFilesAsync();
        foreach (StorageFile myfile in myfiles)
        {
            StorageFile fileCopy = await myfile.CopyAsync(KnownFolders.DocumentsLibrary, myfile.Name, NameCollisionOption.ReplaceExisting);
        }
        var dsd = await Windows.Storage.KnownFolders.DocumentsLibrary.GetFilesAsync();
        foreach (var file in dsd)
        {
            StorageFile sampleFile = await Windows.Storage.StorageFile.GetFileFromPathAsync(file.Path);
        }
    }
    catch (FileNotFoundException)
    {
        // sample file doesn't exist so scenario one must be run
    }
    catch (Exception e)
    {
        var fred = e.Message;
    }
}
I have a problem using JSch to retrieve files/folders and populate them into a JTree.
In JSch, you list files using:
Vector list = channelSftp.ls(path);
But I need that listing as java.io.File objects, so I can get the absolute path and file name, and I don't know how to retrieve it as java.io.File.
Here is my code; it works for a local directory.
public void renderTreeData(String directory, DefaultMutableTreeNode parent, Boolean recursive) {
    File[] children = new File(directory).listFiles(); // list all the files in the directory
    for (int i = 0; i < children.length; i++) { // loop through each
        DefaultMutableTreeNode node = new DefaultMutableTreeNode(children[i].getName());
        // recurse into a subdirectory only if this is a recursive call
        if (children[i].isDirectory() && recursive) {
            parent.add(node); // add as a child node
            renderTreeData(children[i].getPath(), node, recursive); // call again for the subdirectory
        } else if (!children[i].isDirectory()) { // otherwise, if it isn't a directory
            parent.add(node); // add it as a node and do nothing else
        }
    }
}
Please help me. Thanks in advance :)
You can define a variable in your Java bean like:
Vector<String> listFiles = new Vector<String>(); // with getters and setters
Vector list = channelSftp.ls(path);
setListFiles(list); // this gives you the file listing, similar to new File(dir).listFiles()
In JSch you can get the absolute path using ChannelSftp#realpath, but unfortunately there is no way to get the exact file with its extension. You can, however, use something like this to check whether a file name exists in the target directory:
SftpATTRS sftpATTRS = null;
Boolean fileExists = true;
try {
    sftpATTRS = channelSftp.lstat(path + "/" + "filename.*");
} catch (Exception ex) {
    fileExists = false;
}
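Note that a remote SFTP entry can never be a real java.io.File, but if all you need is the absolute path and the file name, you can derive both directly from the ls() listing. A minimal sketch, assuming channelSftp is already connected:
Vector<ChannelSftp.LsEntry> entries = channelSftp.ls(path);
for (ChannelSftp.LsEntry entry : entries) {
    String fileName = entry.getFilename();
    if (".".equals(fileName) || "..".equals(fileName)) {
        continue; // skip the current- and parent-directory entries
    }
    String absolutePath = path + "/" + fileName; // the remote separator is "/"
    System.out.println(absolutePath + " -> " + fileName);
}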
Try this (Linux on the remote server):
public static void cargarRTree(String remotePath, DefaultMutableTreeNode parent) throws SftpException {
    // TODO: change "/" for the remote file separator
    Vector<ChannelSftp.LsEntry> list = sftpChannel.ls(remotePath); // List source directory structure.
    for (ChannelSftp.LsEntry oListItem : list) { // Iterate objects in the list to get file/folder names.
        DefaultMutableTreeNode node = new DefaultMutableTreeNode(oListItem.getFilename());
        if (!oListItem.getAttrs().isDir()) { // If it is a file (not a directory).
            parent.add(node); // add as a child node
        } else {
            if (!".".equals(oListItem.getFilename()) && !"..".equals(oListItem.getFilename())) {
                parent.add(node); // add as a child node
                cargarRTree(remotePath + "/" + oListItem.getFilename(), node); // call again for the subdirectory
            }
        }
    }
}
Afterwards you can invoke this method as:
DefaultMutableTreeNode nroot = new DefaultMutableTreeNode(sshremotedir);
try {
    cargarRTree(sshremotedir, nroot);
} catch (SftpException e1) {
    e1.printStackTrace();
}
yourJTree = new JTree(nroot);
I'm trying to figure out the best way to store user-uploaded files in a file system. The files range from personal files to wiki files. Of course, the DB will point to those files somehow, which I have yet to figure out.
Basic requirements:
Fairly decent security so people can't guess filenames (Picture001.jpg, Picture002.jpg, Music001.mp3 is a big no-no).
Easily backed up and mirrorable (I prefer a way where I don't have to copy the entire HDD every single time I want to back up; I like the idea of backing up just the newest items, but I'm flexible with the options here).
Scalable to millions of files on multiple servers if needed.
One technique is to store the data in files named after the hash (SHA-1) of their contents. This is not easily guessable, any backup program should be able to handle it, and it is easily sharded (by storing hashes starting with 0 on one machine, hashes starting with 1 on the next, etc.).
The database would contain a mapping between the user's assigned name and the SHA-1 hash of the contents.
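A minimal Java sketch of this scheme (the storage root and the two-level sharding are illustrative choices, not part of the original answer):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ContentAddressedStore {

    // Hash the uploaded bytes and store them under a path derived from the hash.
    public static String store(Path storageRoot, byte[] contents)
            throws IOException, NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-1").digest(contents);
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        String hash = hex.toString();
        // Two levels of sharding keep individual directories small.
        Path dir = storageRoot.resolve(hash.substring(0, 2)).resolve(hash.substring(2, 4));
        Files.createDirectories(dir);
        Files.write(dir.resolve(hash), contents);
        return hash; // persist the (user's file name -> hash) mapping in the database
    }

    public static void main(String[] args) throws Exception {
        String hash = store(Paths.get("/var/uploads"), "hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(hash);
    }
}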
GUIDs for filenames, with an automatically expanding folder hierarchy containing no more than a couple of thousand files/folders in each folder. Backing up new files is done by backing up new folders.
You haven't indicated what environment and/or programming language you are using, but here's a C# / .net / Windows example:
using System;
using System.IO;
using System.Xml.Serialization;

/// <summary>
/// Class for generating storage structure and file names for document storage.
/// Copyright (c) 2008, Huagati Systems Co.,Ltd.
/// </summary>
public class DocumentStorage
{
    private static StorageDirectory _StorageDirectory = null;

    public static string GetNewUNCPath()
    {
        string storageDirectory = GetStorageDirectory();
        if (!storageDirectory.EndsWith("\\"))
        {
            storageDirectory += "\\";
        }
        return storageDirectory + GuidEx.NewSeqGuid().ToString() + ".data";
    }

    public static void SaveDocumentInfo(string documentPath, Document documentInfo)
    {
        //the filestream object doesn't like NTFS streams so this is disabled for now...
        return;

        //stores a document object in a separate "docinfo" stream attached to the file it belongs to
        //XmlSerializer ser = new XmlSerializer(typeof(Document));
        //string infoStream = documentPath + ":docinfo";
        //FileStream fs = new FileStream(infoStream, FileMode.Create);
        //ser.Serialize(fs, documentInfo);
        //fs.Flush();
        //fs.Close();
    }

    private static string GetStorageDirectory()
    {
        string storageRoot = ConfigSettings.DocumentStorageRoot;
        if (!storageRoot.EndsWith("\\"))
        {
            storageRoot += "\\";
        }

        //get storage directory if not set
        if (_StorageDirectory == null)
        {
            _StorageDirectory = new StorageDirectory();
            lock (_StorageDirectory)
            {
                string path = ConfigSettings.ReadSettingString("CurrentDocumentStoragePath");
                if (path == null)
                {
                    //no storage tree created yet, create first set of subfolders
                    path = CreateStorageDirectory(storageRoot, 1);
                    _StorageDirectory.FullPath = path.Substring(storageRoot.Length);
                    ConfigSettings.WriteSettingString("CurrentDocumentStoragePath", _StorageDirectory.FullPath);
                }
                else
                {
                    _StorageDirectory.FullPath = path;
                }
            }
        }

        int fileCount = (new DirectoryInfo(storageRoot + _StorageDirectory.FullPath)).GetFiles().Length;
        if (fileCount > ConfigSettings.FolderContentLimitFiles)
        {
            //if the directory has exceeded number of files per directory, create a new one...
            lock (_StorageDirectory)
            {
                string path = GetNewStorageFolder(storageRoot + _StorageDirectory.FullPath, ConfigSettings.DocumentStorageDepth);
                _StorageDirectory.FullPath = path.Substring(storageRoot.Length);
                ConfigSettings.WriteSettingString("CurrentDocumentStoragePath", _StorageDirectory.FullPath);
            }
        }
        return storageRoot + _StorageDirectory.FullPath;
    }

    private static string GetNewStorageFolder(string currentPath, int currentDepth)
    {
        string parentFolder = currentPath.Substring(0, currentPath.LastIndexOf("\\"));
        int parentFolderFolderCount = (new DirectoryInfo(parentFolder)).GetDirectories().Length;
        if (parentFolderFolderCount < ConfigSettings.FolderContentLimitFolders)
        {
            return CreateStorageDirectory(parentFolder, currentDepth);
        }
        else
        {
            return GetNewStorageFolder(parentFolder, currentDepth - 1);
        }
    }

    private static string CreateStorageDirectory(string currentDir, int currentDepth)
    {
        string storageDirectory = null;
        string directoryName = GuidEx.NewSeqGuid().ToString();
        if (!currentDir.EndsWith("\\"))
        {
            currentDir += "\\";
        }
        Directory.CreateDirectory(currentDir + directoryName);
        if (currentDepth < ConfigSettings.DocumentStorageDepth)
        {
            storageDirectory = CreateStorageDirectory(currentDir + directoryName, currentDepth + 1);
        }
        else
        {
            storageDirectory = currentDir + directoryName;
        }
        return storageDirectory;
    }

    private class StorageDirectory
    {
        public string DirectoryName { get; set; }
        public StorageDirectory ParentDirectory { get; set; }

        public string FullPath
        {
            get
            {
                if (ParentDirectory != null)
                {
                    return ParentDirectory.FullPath + "\\" + DirectoryName;
                }
                else
                {
                    return DirectoryName;
                }
            }
            set
            {
                if (value.Contains("\\"))
                {
                    DirectoryName = value.Substring(value.LastIndexOf("\\") + 1);
                    ParentDirectory = new StorageDirectory { FullPath = value.Substring(0, value.LastIndexOf("\\")) };
                }
                else
                {
                    DirectoryName = value;
                }
            }
        }
    }
}
SHA-1 hash of the filename plus a salt (or, if you want, of the file contents; that makes detecting duplicate files easier, but also puts a LOT more stress on the server). This may need some tweaking to be unique (e.g., add the uploading user's ID or a timestamp), and the salt is there to make the names not guessable.
The folder structure is then derived from parts of the hash.
For example, if the hash is "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12" then the folders could be:
/2
/2/2f/
/2/2f/2fd/
/2/2f/2fd/2fd4e1c67a2d28fced849ee1bb76e7391b93eb12
This is to prevent large folders (some operating systems have trouble enumerating folders with a million files, hence making a few subfolders for parts of the hash). How many levels? That depends on how many files you expect, but 2 or 3 is usually reasonable.
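A short sketch of how such a nested path could be derived from the salted hash (the salt value and the three-level depth are illustrative assumptions):
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashedPath {

    // Derive a non-guessable, nested storage path from filename + salt (SHA-1).
    public static String pathFor(String fileName, String salt) throws Exception {
        byte[] digest = MessageDigest.getInstance("SHA-1")
                .digest((fileName + salt).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        String hash = hex.toString();
        // Nested prefix folders as in the example above: /2/2f/2fd/<hash>
        return "/" + hash.substring(0, 1)
                + "/" + hash.substring(0, 2)
                + "/" + hash.substring(0, 3)
                + "/" + hash;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(pathFor("Picture001.jpg", "s3cret-salt"));
    }
}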
Just in terms of one aspect of your question (security): the best way to safely store uploaded files in a filesystem is to ensure the uploaded files are out of the webroot (i.e., you can't access them directly via a URL - you have to go through a script).
This gives you complete control over what people can download (security) and allows for things such as logging. Of course, you have to ensure the script itself is secure, but it means only the people you allow will be able to download certain files.
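As a sketch of that idea, a download script could look roughly like this (Java servlet API; the storage root and the isAllowed check are placeholders for your own layout and access control):
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Serves files stored outside the webroot, after an access check.
public class DownloadServlet extends HttpServlet {

    private static final Path STORAGE_ROOT = Paths.get("/srv/uploads"); // outside the webroot

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String id = req.getParameter("id"); // e.g. the hash or GUID stored in the DB
        if (id == null || !isAllowed(req, id)) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN);
            return;
        }
        Path file = STORAGE_ROOT.resolve(id).normalize();
        if (!file.startsWith(STORAGE_ROOT) || !Files.exists(file)) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND); // also blocks path traversal
            return;
        }
        resp.setContentType("application/octet-stream");
        try (OutputStream out = resp.getOutputStream()) {
            Files.copy(file, out); // stream the file through the script, log here if needed
        }
    }

    private boolean isAllowed(HttpServletRequest req, String id) {
        return req.getSession(false) != null; // placeholder for real authorization logic
    }
}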
Expanding on Phill Sacre's answer, another aspect of security is to use a separate domain name for uploaded files (for instance, Wikipedia uses upload.wikimedia.org), and to make sure that domain cannot read any of your site's cookies. This prevents people from uploading an HTML file with a script that steals your users' session cookies (simply setting the Content-Type header isn't enough, because some browsers are known to ignore it and guess based on the file's contents; scripts can also be embedded in other kinds of files, so it's not trivial to check for HTML and disallow it).