S3 folder listing fails with >1000 files

Sure. This is what we see in dreamfactory.log after sending a single request via the API Docs:

[2017-01-04 12:39:03] local.INFO: [REQUEST] {"API Version":"2.0","Method":"GET","Service":"s3","Resource":"myfolder/"} 
[2017-01-04 12:39:03] local.DEBUG: [REQUEST] {"Parameters":"{\"include_properties\":\"false\",\"include_folders\":\"true\",\"include_files\":\"true\",\"full_tree\":\"false\",\"zip\":\"false\"}","API Key":"xxxxxxx","JWT":"....."} 
[2017-01-04 12:39:03] local.DEBUG: API event handled: s3.{folder_path}.get.pre_process  
[2017-01-04 12:44:03] local.INFO: [REQUEST] {"API Version":"2.0","Method":"GET","Service":"s3","Resource":"myfolder/"} 
[2017-01-04 12:44:03] local.DEBUG: [REQUEST] {"Parameters":"[]","API Key":"xxxxx","JWT":"......."} 
[2017-01-04 12:44:03] local.DEBUG: API event handled: s3.{folder_path}.get.pre_process  
[2017-01-04 12:45:43] local.ERROR: Symfony\Component\Debug\Exception\FatalErrorException: Maximum execution time of 120 seconds exceeded in /opt/bitnami/apps/dreamfactory/htdocs/vendor/aws/aws-sdk-php/src/Api/Parser/XmlParser.php:132
Stack trace:
#0 {main}  
[2017-01-04 12:50:44] local.ERROR: Symfony\Component\Debug\Exception\FatalErrorException: Maximum execution time of 120 seconds exceeded in /opt/bitnami/apps/dreamfactory/htdocs/vendor/aws/aws-sdk-php/src/Api/Parser/XmlParser.php:39
Stack trace:
#0 {main}  

The log level is set in .env with DF_LOG_LEVEL=DEBUG, and the version is DreamFactory 2.4.1 (Bitnami image).

Any clue?