How to run DreamFactory to achieve HA

Hi All,

I plan to run multiple DreamFactory servers to achieve high availability, and
I am using remote database servers for my application (MySQL and MongoDB).

Could somebody please share some tips or guidance on architecting my application for a production environment? I am unsure how to design the architecture.
The app is built with Angular 2, uses MySQL and MongoDB, and runs on the AWS cloud.

Thanks in Advance.

Best Regards

Hello @Madhu_N

I have a scenario similar to what you described.
I also use the AWS cloud, and the only difference from what you intend to do is that instead of remote DB servers, I'm using AWS RDS (for MySQL) and AWS DynamoDB instead of Mongo.

To have a high availability environment, you need to have at least 2 availability zones running your entire infrastructure, and to be scalable you need to separate the application layers into standalone services.

Speaking of DF, you need to divide it into at least 4 layers to keep it horizontally scalable (a prerequisite for high availability):

1 - Databases (AWS RDS / DynamoDB)
2 - Internal cache (AWS ElastiCache)
3 - Filesystem (AWS S3)
4 - Web application (AWS EC2 / ECS)

Each layer can scale independently, and doing so with AWS managed services makes it easy to run each layer in a multi-AZ scenario and saves setup and management work.

The MySQL database in RDS has vertical/horizontal scaling capability (read replicas) and full support for multi-AZ deployments with automatic failover. The same applies to ElastiCache (Redis / Memcached) and to S3 (storing frontend static assets and user files). This way, your web application (PHP) is free to scale without worrying about the persistence layers, keeping its instances fully ephemeral. DreamFactory is perfect for this; the standard JWT tokens it uses make this much easier.
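The JWT point is worth illustrating: because a token is verified with a shared secret rather than server-side session state, any DF instance behind the load balancer can authenticate the same request. Here is a minimal sketch of HS256 signing and verification using only the Python standard library (the secret and claims are made up; DreamFactory's actual implementation is in PHP and does far more):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"shared-app-key"  # hypothetical: in DF this comes from the app config

def b64url(data: bytes) -> bytes:
    # JWT uses URL-safe base64 with the padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(payload: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, header + b"." + body, hashlib.sha256).digest())
    return b".".join([header, body, sig]).decode()

def verify(token: str) -> dict:
    header, body, sig = token.encode().split(b".")
    expected = b64url(hmac.new(SECRET, header + b"." + body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    # No server-side lookup needed: any instance holding SECRET can do this,
    # which is what makes the web tier fully ephemeral.
    return json.loads(base64.urlsafe_b64decode(body + b"=" * (-len(body) % 4)))

token = sign({"sub": "user-42"})
print(verify(token)["sub"])  # -> user-42
```

The key property is that verification is a pure function of the token and the secret, so no sticky sessions or shared session store are required for authentication.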

Finally, you should configure an AWS ELB (Elastic Load Balancer), optionally with Launch Configurations
and Auto Scaling Groups, to maintain and split the web load across your EC2 / ECS instances.
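To make that last step concrete, here is a hypothetical CloudFormation sketch of a classic ELB fronting an Auto Scaling Group of DF web instances across two AZs. All resource names, AMI, subnet, and certificate IDs are placeholders, and the health-check path should be adjusted to an endpoint of your DF install that answers unauthenticated:

```yaml
Resources:
  WebELB:
    Type: AWS::ElasticLoadBalancing::LoadBalancer
    Properties:
      Subnets: [subnet-aaaa1111, subnet-bbbb2222]   # one per AZ
      Listeners:
        - LoadBalancerPort: "443"
          InstancePort: "80"
          Protocol: HTTPS
          SSLCertificateId: arn:aws:acm:...          # placeholder
      HealthCheck:
        Target: HTTP:80/                             # adjust to a DF status path
        Interval: "30"
        Timeout: "5"
        HealthyThreshold: "2"
        UnhealthyThreshold: "5"
  WebLaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-00000000          # placeholder DF AMI
      InstanceType: t2.medium
  WebASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"                   # at least one instance per AZ
      MaxSize: "6"
      LaunchConfigurationName: !Ref WebLaunchConfig
      VPCZoneIdentifier: [subnet-aaaa1111, subnet-bbbb2222]
      LoadBalancerNames: [!Ref WebELB]
```

With `MinSize: 2` spread over two subnets in different AZs, losing one zone still leaves a healthy instance behind the ELB.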

I also recommend that you consider putting AWS CloudFront (CDN) in front of your ELB as the public layer of your DF API. This reduces request latency, reduces your internal traffic, and improves your users' experience.

If you have any further questions, post them here.

Best Regards
Junior Conte

Hi @juniorconte,

Thank you so much for the information.
I will keep you posted with any further doubts.

Best Regards

Hi @juniorconte,

I have run into several confusions and doubts.

Here is my rough architecture design in the attached image; please review it and guide me on how, where, and what to use.

Here is my application use cases.

1 - I have to use multiple DF instances (1 and 2, standalone in different zones, with auto-scaling groups)
2 - I am using remote MySQL master-slave replication (nodes 1 and 2 in different zones to achieve HA) and a 3-node MongoDB replica set (to achieve HA and automatic failover)

Here are my confusions (I am new to the DF platform) and requests:

1 - When I use multiple DF instances in different zones:
- How and where should I use load balancers (before and after the DF layer, as in the attached image), and
which type of LB?
2 - Where should I use Memcached?
3 - What kinds of security can we apply to the whole infrastructure?
4 - Any suggestions for a better infrastructure plan, or changes to the one in the attached image?

Could you please advise?

Thanks & Regards

Hi @Madhu_N

1 - If you do not have another API to expose through DF as a web service, then you do not need an internal LB.

2 - DF uses a cache system internally to handle user sessions; this cache can be of different types (File, Redis, or Memcached). Since the goal is to have ephemeral application instances, consider using Redis or Memcached. If I am not mistaken, DF cannot handle a Memcached cluster, so the only way to get automatic failover is through ElastiCache with Redis on M3 instances with a primary and a replica.
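As a rough illustration, pointing DF at an ElastiCache Redis endpoint is a matter of a few `.env` settings. The endpoint below is a placeholder, and the exact key names vary by DF version, so treat this as a sketch and check the `.env` that ships with your install:

```ini
# Hypothetical DF .env fragment: use the ElastiCache Redis primary endpoint
# so sessions survive any individual web instance being replaced.
CACHE_DRIVER=redis
CACHE_HOST=my-df-cache.abc123.use1.cache.amazonaws.com  # placeholder endpoint
CACHE_PORT=6379
CACHE_DATABASE=2
```

Because ElastiCache handles the failover, the application only ever needs the primary endpoint name, not the individual node addresses.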

3 - AWS offers a wide range of security features; consider building a VPC with at least 2 subnets (1 per AZ) and security groups. This will let you create real isolation by exposing only the ELB to the public internet.
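The security-group part of that isolation can be sketched like this (hypothetical CloudFormation, with placeholder IDs): the ELB's group is the only one open to the internet, and the web tier accepts HTTP only from the ELB's group, never from arbitrary addresses:

```yaml
Resources:
  ElbSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Public entry point (HTTPS only)
      VpcId: vpc-00000000            # placeholder
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 443
          ToPort: 443
          CidrIp: 0.0.0.0/0
  WebSG:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: DF web tier, reachable only from the ELB
      VpcId: vpc-00000000            # placeholder
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          SourceSecurityGroupId: !Ref ElbSG   # no public ingress at all
```

The same pattern repeats inward: the database and cache groups would admit traffic only from `WebSG`.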

4 - I drew the architecture I described earlier; there are similarities with the one you created. Backups are already part of the RDS and DynamoDB services, so moving them to S3 (as you did) is optional. CloudWatch can be used at various levels of the infrastructure; SES, SNS, and SQS can also be connected as services in DF and consumed through automatically generated APIs.

If you have any further questions, let me know.
See you.

@Madhu_N You may have seen it already, but the DF core team has published a whitepaper on scalability that could be useful for your study.


Thanks for the information.
I will get back to you if I need anything further.

Best Regards

Good comments. I agree with dividing it into 4 layers, which is what we'd normally do in our production environment as well:

1 - Databases (AWS RDS / DynamoDB)
2 - Internal cache (AWS ElastiCache)
3 - Filesystem (AWS S3)
4 - Web application (AWS EC2 / ECS)