I am getting this error while importing a database to my local machine. I am using WampServer. Can anyone please help me?
Fatal error: Maximum execution time of 360 seconds exceeded in
C:\wamp\apps\phpmyadmin4.1.14\libraries\dbi\DBIMysqli.class.php on line 285
Line 285 is: return mysqli_query($link, $query, $method);
Thanks
Option 1
Use the MySQL console
Go to wampmanager -> MySQL -> MySQL console, or on Windows open a command prompt in the MySQL bin folder and run mysql.exe -u root.
USE YourDatabase;
SOURCE C:/yourpath/file.sql;
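Alternatively (assuming mysql.exe is on your PATH, or you run it from MySQL's bin folder), you can import the dump in one shot from the Windows command line, which bypasses phpMyAdmin's limits entirely; add -p if your root account has a password:
mysql.exe -u root YourDatabase < C:\yourpath\file.sql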
Option 2
Modify phpmyadmin.conf (in WAMP's alias folder); inside the existing <Directory> block, set:
<Directory "c:/wamp/apps/phpmyadmin4.1.14/">
    ...
    php_admin_value upload_max_filesize 128M
    php_admin_value post_max_size 128M
    php_admin_value max_execution_time 360
    php_admin_value max_input_time 360
</Directory>
Change the sizes to whatever you want.
Sample values (depending on your needs):
post_max_size = 750M
upload_max_filesize = 750M
max_execution_time = 5000
max_input_time = 5000
memory_limit = 1000M
For completeness: if the above doesn't work (it should), then go ahead and add
$cfg['ExecTimeLimit'] = <some large value, e.g. 5000-6000>;
to phpMyAdmin's config.inc.php (in the phpMyAdmin root folder, not libraries).
Don't edit config.default.php directly.
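For reference, a minimal sketch of the change (pick a value that suits you; 0 disables phpMyAdmin's own limit):
// in phpMyAdmin's config.inc.php, below the existing settings
$cfg['ExecTimeLimit'] = 6000; // or 0 for no limit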
Try adding this at the beginning of your PHP file:
ini_set('max_execution_time', 0); // 0 removes the limit; note the directive name is lowercase
If you are using WAMP and the problem is caused by it:
Increase max_execution_time in the php.ini that Apache uses (in WAMP it is typically under wamp\bin\apache\apacheX.Y.Z\bin\php.ini), then go to
C:\wamp\apps\phpmyadmin3.4.10.1\libraries (adjust the path to your installation),
open config.default.php and change the value of $cfg['ExecTimeLimit'] to 0:
$cfg['ExecTimeLimit'] = 0;
This should resolve your issue.
I am trying to set up Xdebug for shopware-docker, without success.
VHOST_[FOLDER_NAME_UPPER_CASE]_IMAGE=ghcr.io/shyim/shopware-docker/6/nginx:php74-xdebug
After replacing the folder name and running swdc up, Xdebug should be activated.
Which folder name should I use?
Using myname, the same name as in /var/www/html/myname, swdc up myname returns an error:
swdc up myname
[+] Running 2/0
⠿ Network shopware-docker_default Created 0.0s
⠿ Container shopware-docker-mysql-1 Created 0.0s
[+] Running 1/1
⠿ Container shopware-docker-mysql-1 Started 0.3s
.database ready!
[+] Running 0/1
⠿ app_myname Error 1.7s
Error response from daemon: manifest unknown
EDIT #1
With this setup VHOST_MYNAME_IMAGE=ghcr.io/shyim/shopware-docker/6/nginx:php81-xdebug (versioned Xdebug) the app started:
// $HOME/.config/swdc/env
...
VHOST_MYNAME_IMAGE=ghcr.io/shyim/shopware-docker/6/nginx:php81-xdebug
But when I set a debug breakpoint (e.g. in index.php), nothing happens.
EDIT #2
As @Alex recommended, I placed xdebug_break() inside my code and it works.
When stopping on the breakpoint, the debugger log answers with hints/warnings like those described in the manual:
...
Cannot find a local copy of the file on server /var/www/html/%my_path%
Local path is //var/www/html/%my_path%
...
Clicking on "Click to set up path mapping" opens the modal.
Clicking the "Use path mapping (...)" select input inside the modal
leaves the "File path in project" input field showing "undefined".
But I have already set up the mapping as described in the manual, under File | Settings | PHP | Servers:
Why does my mapping not work? Where did my setup fail?
The path mapping needs to be between your local project path on your workstation and the path inside the Docker container. Without it, Xdebug has a hard time mapping the breakpoints from PhpStorm to the actual code inside the container.
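For illustration, the mapping in File | Settings | PHP | Servers should roughly look like this (the local path is an assumption for your workstation; the server path comes from the container):
Local path (File/Directory):   /home/you/projects/myname   (wherever the project lives on your workstation)
Absolute path on the server:   /var/www/html/myname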
If mapping the path correctly does not work, and if it is a possibility for you, I can highly recommend switching to http://devenv.sh for your development environment. Shopware itself promotes this new environment in its documentation: https://developer.shopware.com/docs/guides/installation/devenv and provides an example of how to enable Xdebug:
# devenv.local.nix File
{ pkgs, config, lib, ... }:
{
  languages.php.package = pkgs.php.buildEnv {
    extensions = { all, enabled }: with all; enabled ++ [ amqp redis blackfire grpc xdebug ];
    extraConfig = ''
      # Copy the config from devenv.nix and append the XDebug config
      # [...]
      xdebug.mode=debug
      xdebug.discover_client_host=1
      xdebug.client_host=127.0.0.1
    '';
  };
}
A correct path mapping should not be needed here, as your local file location is the same for XDebug and your PHPStorm.
When trying to use the hub.load function from tensorflow_hub, I get an OSError: SavedModel file does not exist at: error.
The weird thing is that it worked a few days ago, so I don't quite understand why I'm getting this error now.
Code to reproduce:
import tensorflow as tf
import tensorflow_hub as hub
URL = 'https://tfhub.dev/google/universal-sentence-encoder/4'
embed = hub.load(URL)
Specific error received:
OSError Traceback (most recent call last)
<ipython-input-11-dfb80f0299b2> in <module>
1 URL = 'https://tfhub.dev/google/universal-sentence-encoder/4'
----> 2 embed = hub.load(URL)
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow_hub/module_v2.py in load(handle, tags)
100 if tags is None and is_hub_module_v1:
101 tags = []
--> 102 obj = tf_v1.saved_model.load_v2(module_path, tags=tags)
103 obj._is_hub_module_v1 = is_hub_module_v1 # pylint: disable=protected-access
104 return obj
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py in load(export_dir, tags)
576 ValueError: If `tags` don't match a MetaGraph in the SavedModel.
577 """
--> 578 return load_internal(export_dir, tags)
579
580
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/saved_model/load.py in load_internal(export_dir, tags, loader_cls)
586 tags = nest.flatten(tags)
587 saved_model_proto, debug_info = (
--> 588 loader_impl.parse_saved_model_with_debug_info(export_dir))
589
590 if (len(saved_model_proto.meta_graphs) == 1 and
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/saved_model/loader_impl.py in parse_saved_model_with_debug_info(export_dir)
54 parsed. Missing graph debug info file is fine.
55 """
---> 56 saved_model = _parse_saved_model(export_dir)
57
58 debug_info_path = os.path.join(
~/opt/anaconda3/lib/python3.7/site-packages/tensorflow/python/saved_model/loader_impl.py in parse_saved_model(export_dir)
111 (export_dir,
112 constants.SAVED_MODEL_FILENAME_PBTXT,
--> 113 constants.SAVED_MODEL_FILENAME_PB))
114
115
OSError: SavedModel file does not exist at: /var/folders/77/rvfl368x44s51r8dc3b6l2rh0000gn/T/tfhub_modules/063d866c06683311b44b4992fd46003be952409c/{saved_model.pbtxt|saved_model.pb}
So just deleting that folder and running the hub.load() function again solves the issue.
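For reference, a minimal sketch of clearing that cache, assuming the default location under the system temp directory (the hash-named subfolder is the one mentioned in the error):
import os
import shutil
import tempfile

# Default TF-Hub cache location; adjust if the error points somewhere else.
cache_dir = os.path.join(tempfile.gettempdir(), "tfhub_modules")
shutil.rmtree(cache_dir, ignore_errors=True)  # remove the stale cached module(s)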
I have tried the above solution, but it didn't work for me...
Here's what worked:
Download the model from tfhub.dev with assets, variables and .pb checkpoint file.
Make sure you import tensorflow_text module!
import tensorflow_text
Specify the downloaded folder path in the hub.load() statement as in:
model = hub.load("/Users/bilguun/Desktop/universal-sentence-encoder-multilingual-large_3/")
I was using i3d = hub.load("https://tfhub.dev/deepmind/i3d-kinetics-400/1").signatures['default'] as shown here.
This method of loading the model worked that day, but after a few days I got the same error: OSError: SavedModel file does not exist at: C:\Users\catal\AppData\Local\Temp\tfhub_modules\092225fb776e28d6d64ac605ab6be03f18dd2027{saved_model.pbtxt|saved_model.pb}
After doing some research, I understood that the saved_model file location (specified in the error) was temporary so even if that folder still exists, there is no more saved_model.pb in it. So I downloaded the model linked here: https://tfhub.dev/deepmind/i3d-kinetics-400/1 and set i3d = hub.load("C:\\absolute_path_to_saved_model_folder").signatures['default'] and it worked.
So using Bilguun's answer really helped in my case.
This kind of error happens because the downloaded model is saved in a temporary folder created by the application.
Since it is a temporary folder, the model saved in it may get deleted, or the whole folder can be deleted.
When hub.load() is called, the program checks for the temporary folder; if the folder is not found, it downloads the data from the internet.
But if the folder is found and the data (or model) is not there, it raises an error instead of downloading it again.
This can be resolved by deleting the temporary folder completely.
On a Mac the temporary folder can be opened with the command "open $TMPDIR" in the terminal (reference: https://osxdaily.com/2018/08/17/where-temp-folder-mac-access/).
The name of the folder can be read from the raised error, and it can then be deleted easily.
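Another way to avoid this entirely is to point TF-Hub at a persistent cache directory instead of the system temp folder, via the TFHUB_CACHE_DIR environment variable; a minimal sketch (the cache path is just an example):
import os
os.environ["TFHUB_CACHE_DIR"] = "/path/to/a/persistent/tfhub_cache"  # example path, pick any writable directory

import tensorflow_hub as hub
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")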
We use the client_body_in_file_only option with nginx to allow file upload via Ajax. The config looks like this:
location ~ ^(\/path1|\/path2)$ {
limit_except POST { deny all; }
client_body_temp_path /path/to/app/tmp;
client_body_in_file_only on;
client_body_buffer_size 128K;
client_max_body_size 1000M;
# this option is a quick hack to make sure files get saved on (i.e. this type of request goes to) a specific server
proxy_pass http://admin;
proxy_pass_request_headers on;
proxy_set_header X-FILE $request_body_file;
proxy_set_body off;
proxy_redirect off;
# might not need?
proxy_read_timeout 3m;
}
This works, but the web server process (Mongrel) that handles the request has to chmod (via sudo) the temp file that comes through in headers['X-FILE'] before it can do anything with it, because the temp file arrives with 600 permissions.
I'm not happy with this approach, which requires us to edit the /etc/sudoers file to allow the web server user to run chmod via sudo without a password. It feels very insecure.
Is there a way, with the nginx config, to change the permissions on the temp file that is created, e.g. to 775?
EDIT: I just tried changing the value of the umask option in the nginx init config, then restarting nginx, but it didn't help. It had been 0022; I changed it to 0002. In both cases the file comes through with 600 permissions.
EDIT 2: I also tried adding this line under the proxy_redirect line in the nginx config:
proxy_store_access user:rw group:rw all:r;
But it didn't make any difference - the file still has only user:rw permissions.
Looking through the nginx source, it appears that the only mechanism that would modify the permissions of the temporary file is the request_body_file_group_access property of the request, which is consulted in ngx_http_write_request_body():
if (r->request_body_file_group_access) {
tf->access = 0660;
}
But even that limits you to 0660 and it seems that it is not a user-settable property, only being utilized by the ngx_http_dav module.
The permissions are ultimately set in ngx_open_tempfile(), where they default to 0600:
fd = open((const char *) name, O_CREAT|O_EXCL|O_RDWR, access ? access : 0600);
So it seems that there is currently no configuration-based solution. If you're willing/able to build nginx from source, one possibility is to apply a simple patch to set the permissions to whatever you want in ngx_http_write_request_body():
+ tf->access = 0644;
+
if (r->request_body_file_group_access) {
tf->access = 0660;
}
rb->temp_file = tf;
I tested this and obtained the following, the first file having been uploaded without the modification, and the second file with it:
$ ls -al /tmp/upload/
total 984
drwxr-xr-x 2 nobody root 12288 Feb 18 13:42 .
drwxrwxrwt 16 root root 12288 Feb 18 14:24 ..
-rw------- 1 nobody nogroup 490667 Feb 18 13:40 0000000001
-rw-r--r-- 1 nobody nogroup 490667 Feb 18 13:42 0063184684
It seems that it is not possible at the moment to configure the file permissions, but there is an official feature request.
The file permissions are always 0600 making the application unable to read the file at all. [...] This is currently an unsupported scenario: [Nginx] creates the temporary file with the default permissions [...] 0600 (unless request_body_file_group_access is set - but unfortunately that property is not settable).
The ticket was opened in October 2018 with minor priority.
I've been working with WAMP for 2 years now and it's the first time I've had this problem. I created a new website base with Symfony, and now I'm adding some files to it in Windows (by creating a bundle in the console), but it doesn't appear in the browser on localhost even when I refresh, so when I go to /web, I get errors like these:
( ! ) Fatal error: Uncaught Error: Class 'SNS\PlatformBundle\SNSPlatformBundle' not found in D:\wamp\www\sns_symfony\sns_symfony\app\AppKernel.php on line 20
( ! ) Error: Class 'SNS\PlatformBundle\SNSPlatformBundle' not found in D:\wamp\www\sns_symfony\sns_symfony\app\AppKernel.php on line 20
Call Stack
# Time Memory Function Location
1 0.0010 385736 {main}( ) ...\app.php:0
2 0.0070 418000 AppKernel->handle( ) ...\app.php:19
3 0.0070 418000 AppKernel->boot( ) ...\Kernel.php:195
4 0.0070 418000 AppKernel->initializeBundles( ) ...\Kernel.php:132
5 0.0070 417952 AppKernel->registerBundles( ) ...\Kernel.php:492
Can someone help me please? ^^'
To explain a bit more: I used Symfony's bundle generator, so I didn't write anything myself, I just used the console. By the way, there are some folders that WAMP can't see (I don't see them in the browser on localhost), and the files it's looking for are in those folders it can't see. That's the problem.
First of all, double-check that the bundle really exists on your hard drive. You're on Windows, so just go to D:\wamp\www\sns_symfony\sns_symfony\src and check whether there's an SNS\PlatformBundle\SNSPlatformBundle.php in your src dir. If not, now you know the generator didn't generate anything; maybe it was accidentally aborted?
Then check that you have the right PSR-0 or PSR-4 (most likely) namespace in your composer.json file. You can run composer validate to see warnings.
And as the last step, run composer dump-autoload, which updates the autoload file.
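For reference, the failing line 20 of app/AppKernel.php is presumably the bundle registration; a minimal sketch of what it should look like (the surrounding bundle list depends on your project):
// app/AppKernel.php (sketch)
use Symfony\Component\Config\Loader\LoaderInterface;
use Symfony\Component\HttpKernel\Kernel;

class AppKernel extends Kernel
{
    public function registerBundles()
    {
        $bundles = array(
            new Symfony\Bundle\FrameworkBundle\FrameworkBundle(),
            // ... other bundles ...
            new SNS\PlatformBundle\SNSPlatformBundle(), // this line fails if the class cannot be autoloaded
        );

        return $bundles;
    }

    public function registerContainerConfiguration(LoaderInterface $loader)
    {
        $loader->load($this->getRootDir().'/config/config.yml');
    }
}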
Finally, I found the solution after hours of deep research. Here it is:
Create the bundle.
Edit your app/config/config.yml file (add templating: engines: ['twig'] under framework:):
framework:
    templating:
        engines: ['twig']
Thanks for the help, people! :D
SOLVED !
I want to prepare my application to be compatible with many database types. To try it out, I've used H2, MySQL and PostgreSQL, so I've added to build.sbt:
"mysql" % "mysql-connector-java" % "5.1.35",
"org.postgresql" % "postgresql" % "9.4-1201-jdbc41"
and I've added conf/prod.conf with all the configuration except the database configuration, plus 3 files:
conf/h2.conf
include "prod.conf"
db.h2.driver=org.h2.Driver
db.h2.url="jdbc:h2:mem:dontforget"
db.h2.jndiName=DefaultDS
ebean.h2="fr.chklang.dontforget.business.*"
conf/mysql.conf
include "prod.conf"
db.mysql.driver=com.mysql.jdbc.Driver
db.mysql.jndiName=DefaultDS
ebean.mysql="fr.chklang.dontforget.business.*"
conf/postgresql.conf
include "prod.conf"
db.postgresql.driver=org.postgresql.Driver
db.postgresql.jndiName=DefaultDS
ebean.postgresql="fr.chklang.dontforget.business.*"
In addition, I have three folders under conf/evolutions:
evolutions/h2
evolutions/mysql
evolutions/postgresql
With all of this, the user can start my application with this command:
-Dconfig.file=dontforget-conf.conf -DapplyEvolutions.default=true -Dhttp.port=10180 &
And this conf file is
include "postgresql.conf"
db.postgresql.url="jdbc:postgresql:dontforget"
db.postgresql.user=myUserName
db.postgresql.password=myPassword
But with this configuration, when my application tries to connect to the DB, I get:
The default EbeanServer has not been defined? This is normally set via the ebean.datasource.default property. Otherwise it should be registered programatically via registerServer()]]
So I've tried to add this to my configuration:
ebean.datasource.default=postgresql
but when I add it I get:
Configuration error: Configuration error[Configuration error[]]
at play.api.Configuration$.play$api$Configuration$$configError(Configuration.scala:94)
at play.api.Configuration.reportError(Configuration.scala:743)
at play.Configuration.reportError(Configuration.java:310)
at play.db.ebean.EbeanPlugin.onStart(EbeanPlugin.java:56)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:91)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:91)
at scala.collection.immutable.List.foreach(List.scala:383)
at play.api.Play$$anonfun$start$1.apply$mcV$sp(Play.scala:91)
at play.api.Play$$anonfun$start$1.apply(Play.scala:91)
at play.api.Play$$anonfun$start$1.apply(Play.scala:91)
at play.utils.Threads$.withContextClassLoader(Threads.scala:21)
at play.api.Play$.start(Play.scala:90)
at play.core.StaticApplication.<init>(ApplicationProvider.scala:55)
at play.core.server.NettyServer$.createServer(NettyServer.scala:253)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:289)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:284)
at scala.Option.map(Option.scala:145)
at play.core.server.NettyServer$.main(NettyServer.scala:284)
at play.core.server.NettyServer.main(NettyServer.scala)
Caused by: Configuration error: Configuration error[]
at play.api.Configuration$.play$api$Configuration$$configError(Configuration.scala:94)
at play.api.Configuration.reportError(Configuration.scala:743)
at play.api.db.BoneCPApi.play$api$db$BoneCPApi$$error(DB.scala:271)
at play.api.db.BoneCPApi$$anonfun$getDataSource$3.apply(DB.scala:438)
at play.api.db.BoneCPApi$$anonfun$getDataSource$3.apply(DB.scala:438)
at scala.Option.getOrElse(Option.scala:120)
at play.api.db.BoneCPApi.getDataSource(DB.scala:438)
at play.api.db.DB$$anonfun$getDataSource$1.apply(DB.scala:142)
at play.api.db.DB$$anonfun$getDataSource$1.apply(DB.scala:142)
at scala.Option.map(Option.scala:145)
at play.api.db.DB$.getDataSource(DB.scala:142)
at play.api.db.DB.getDataSource(DB.scala)
at play.db.DB.getDataSource(DB.java:25)
at play.db.ebean.EbeanPlugin.onStart(EbeanPlugin.java:54)
So I don't understand how I can do it.
YES!!! I've found it, after some debugging (etc.)!
There were 2 problems.
First problem: I had to add a key to my application.conf:
ebeanconfig.datasource
For me (for example), postgresql.conf is modified to:
db.postgresql.driver=org.postgresql.Driver
db.postgresql.jndiName=DefaultDS
ebean.postgresql="fr.chklang.dontforget.business.*"
ebeanconfig.datasource.default=postgresql
Second problem: include in Play 2.3.x doesn't work because the conf folder isn't added to the classpath (ref: Load file from '/conf' directory on Cloudbees), so we must concatenate prod.conf, postgresql.conf and dontforget.conf into one single file.
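For illustration, the single merged file passed via -Dconfig.file could look roughly like this (a sketch; the prod.conf contents are inlined at the top and the credentials are placeholders):
# merged configuration file: prod.conf contents inlined here, followed by the database settings
# ...

db.postgresql.driver=org.postgresql.Driver
db.postgresql.jndiName=DefaultDS
db.postgresql.url="jdbc:postgresql:dontforget"
db.postgresql.user=myUserName
db.postgresql.password=myPassword
ebean.postgresql="fr.chklang.dontforget.business.*"
ebeanconfig.datasource.default=postgresql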
I hope I have helped some other developers...