Hi,
Our DreamFactory 2.2.0 instance is integrated with Amazon S3 through an “AWS S3”-type service configured against a specific S3 container (bucket).
In that bucket there is a folder (let’s say "my_folder"), and inside that folder a growing number of image files.
The DF endpoint that lists the files in the folder (/api/v2/s3/my_folder) worked fine until the file count exceeded roughly 1000 images. Now the server response is:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>503 Service Unavailable</title>
</head><body>
<h1>Service Unavailable</h1>
<p>The server is temporarily unable to service your
request due to maintenance downtime or capacity
problems. Please try again later.</p>
<p>Additionally, a 503 Service Unavailable
error was encountered while trying to use an ErrorDocument to handle the request.</p>
</body></html>
And the server logs show the following message:
[2016-12-27 16:13:45] local.ERROR: exception 'Symfony\Component\Debug\Exception\FatalErrorException' with message 'Maximum execution time of 500 seconds exceeded' in /opt/bitnami/apps/dreamfactory/htdocs/vendor/aws/aws-sdk-php/src/Api/AbstractModel.php:0
Stack trace:
#0 {main}
We found this topic in the forum, which may be related, though it is not connected to AWS S3.
We read that the list-objects operation of the S3 REST API returns at most 1000 records per request by default, but even so, I would expect the endpoint to return at least those 1000 records along with a continuation marker for fetching the rest.
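For reference, the AWS SDK that DreamFactory bundles already exposes that continuation mechanism. Here is a minimal sketch of paginated listing with aws-sdk-php v3 (bucket name and region are placeholders, and credentials are assumed to come from the environment):

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1', // placeholder: use the bucket's region
]);

// Each ListObjects response carries at most 1000 keys; the paginator
// follows the IsTruncated/Marker fields, so no single request has to
// return (or parse) the whole bucket at once.
$pages = $s3->getPaginator('ListObjects', [
    'Bucket' => 'my-bucket',  // placeholder
    'Prefix' => 'my_folder/',
]);

foreach ($pages as $page) {
    foreach ((array) $page['Contents'] as $object) {
        echo $object['Key'], "\n";
    }
}

So the 1000-record cap by itself shouldn’t be fatal; the question is why DF’s listing keeps running until the PHP execution limit is hit.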
Is this a bug or a known limitation in DF 2.2.0?
Is there any way to work around it?
Thanks in advance.
We’ve also tested DF 2.4.1, with the same outcome (503 Service Unavailable).
Is this a bug, @benbusse?
@rbarriuso can you turn on debugging, try to use the S3 service in question, and tell me what the error logs say?
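For reference, both switches live in the .env file at the root of the install; something like this (standard keys in a stock 2.x install):

# .env
APP_DEBUG=true
DF_LOG_LEVEL=DEBUG

After changing it, clear the cached config (php artisan config:clear) so the new level takes effect.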
Sure. This is what we can see in dreamfactory.log after sending a single request via the API Docs:
[2017-01-04 12:39:03] local.INFO: [REQUEST] {"API Version":"2.0","Method":"GET","Service":"s3","Resource":"myfolder/"}
[2017-01-04 12:39:03] local.DEBUG: [REQUEST] {"Parameters":"{\"include_properties\":\"false\",\"include_folders\":\"true\",\"include_files\":\"true\",\"full_tree\":\"false\",\"zip\":\"false\"}","API Key":"xxxxxxx","JWT":"....."}
[2017-01-04 12:39:03] local.DEBUG: API event handled: s3.{folder_path}.get.pre_process
[2017-01-04 12:44:03] local.INFO: [REQUEST] {"API Version":"2.0","Method":"GET","Service":"s3","Resource":"myfolder/"}
[2017-01-04 12:44:03] local.DEBUG: [REQUEST] {"Parameters":"[]","API Key":"xxxxx","JWT":"......."}
[2017-01-04 12:44:03] local.DEBUG: API event handled: s3.{folder_path}.get.pre_process
[2017-01-04 12:45:43] local.ERROR: Symfony\Component\Debug\Exception\FatalErrorException: Maximum execution time of 120 seconds exceeded in /opt/bitnami/apps/dreamfactory/htdocs/vendor/aws/aws-sdk-php/src/Api/Parser/XmlParser.php:132
Stack trace:
#0 {main}
[2017-01-04 12:50:44] local.ERROR: Symfony\Component\Debug\Exception\FatalErrorException: Maximum execution time of 120 seconds exceeded in /opt/bitnami/apps/dreamfactory/htdocs/vendor/aws/aws-sdk-php/src/Api/Parser/XmlParser.php:39
Stack trace:
#0 {main}
The log level configuration in .env is DF_LOG_LEVEL=DEBUG, and the version is DreamFactory 2.4.1 (Bitnami image).
Any clue?
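For what it’s worth, raising PHP’s execution limit doesn’t help: the first log above already shows a 500-second limit being exceeded. For completeness, that setting lives in php.ini (path as in the Bitnami stack; adjust for other layouts):

; /opt/bitnami/php/etc/php.ini
; Stopgap only: the request still has to fetch and parse every object
; before responding, so a large enough bucket will exceed any limit.
max_execution_time = 600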
Do you know if anybody else has experienced this problem, @mattschaer? Thanks
Any news on this? Should I open a bug report?
Hello. I’m stuck on this issue too. Our AWS S3 bucket has more than 1000 files, and we get no response from the server. The log shows this:
[2017-03-17 22:38:25] local.ERROR: exception 'Symfony\Component\Debug\Exception\FatalErrorException' with message 'Maximum execution time of 30 seconds exceeded' in /var/www/html/dreamfactory/vendor/aws/aws-sdk-php/src/Api/Parser/XmlParser.php:81
Stack trace:
#0 {main}
Hi all,
Has there been any progress on addressing this issue?
Thanks,
Michael