
WEEK 12 and the conclusion of Module 2

Alexander Clemens • May 14, 2023


Week 12 marks the finish of Module 2 and the transition into our four-to-five-person group projects, which we will work on throughout Module 3. This week we took our Module 2 test, which covered aggregates in Domain-Driven Design, polling and pub/sub messaging methods, the differences between direct DOM programming in JavaScript and the benefits of using React on the front end, microservice integration, Docker's drawbacks, and Python and JavaScript coding challenges.


Some of the key takeaways I had from this week were the following:


What is an aggregate in Domain-Driven Design, and how does it help a team model a problem domain?


I define an aggregate as a collection of domain objects grouped into a single unit, a bigger whole. Aggregates are beneficial in the design and architecture of a system because they help make complex systems easier to create, design, and maintain, which improves the code's lifecycle. Code outside of the aggregate interacts with it through the aggregate root. The example our instructor gave when he first mentioned aggregates, around week seven or eight of Module 2 from my memory, was a shopping cart: the shopping cart class and the methods associated with it form the aggregate, and the shopping cart itself is the aggregate root. An aggregate can only have one root, so if we wanted to change the items in the cart, we would have to go through the aggregate root to access and change them.
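To make the shopping-cart example concrete, here is a minimal Python sketch (the class and method names are my own illustration, not code from the lesson): outside code never modifies a line item directly and instead goes through the ShoppingCart root, which is also where invariants are enforced.

```python
from dataclasses import dataclass, field


@dataclass
class LineItem:
    """A domain object that lives inside the aggregate."""
    sku: str
    quantity: int


@dataclass
class ShoppingCart:
    """Aggregate root: all changes to items go through this class."""
    items: list = field(default_factory=list)

    def add_item(self, sku: str, quantity: int) -> None:
        # Enforce the invariant at the root rather than on each item.
        if quantity < 1:
            raise ValueError("quantity must be at least 1")
        self.items.append(LineItem(sku, quantity))

    def remove_item(self, sku: str) -> None:
        self.items = [item for item in self.items if item.sku != sku]
```

Because callers only ever hold a reference to the cart, a rule like "no zero-quantity items" lives in exactly one place.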


Defining the similarities and differences between polling and pub/sub, two messaging methods for integrating microservices.


The similarities between polling and pub/sub that I see:

- Both are messaging approaches that ensure data/messages can be transferred between services

- Both ensure that a user will eventually receive the data/message

- Both let the sender send messages without waiting for a response. This is known as decoupling: with pub/sub, the server side sending the message doesn't need to know the details of the receiver/subscriber, and with polling, the client doesn't need to know how the server works internally

- Both help with scaling your application and delivering message/data updates to your users


The differences between polling and pub/sub that I see:

- Number of subscribers/users. Polling works well when the system/app has a smaller user base (for example, our Module 2 project) but becomes inefficient with a large user base, because each user/subscriber has to initiate its own requests to the publisher/sender. Pub/sub is the more practical implementation for systems/apps with a large base of subscribers and is the better alternative for maintaining peak performance.

- Timing of communication. Pub/sub pushes messages out as soon as they are published, so it suits systems where responses need to arrive immediately; Apache Kafka is an example of a pub/sub system. Polling is used in building systems where the response doesn't have to be immediate, since updates only arrive on the next poll.

- Method of delivery. Pub/sub is known as a push method: the publisher sends/pushes messages to its subscribers even when a subscriber hasn't requested them. Polling uses a pull model, where the subscriber requests the information/data from the publisher.
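The push/pull distinction above can be sketched in a few lines of Python. This is a toy in-memory broker for illustration only, not a real message bus like Kafka: `publish` pushes to subscribers immediately (pub/sub), while `poll` waits for the client to ask (polling).

```python
from collections import defaultdict


class Broker:
    """Toy in-memory broker contrasting push (pub/sub) and pull (polling)."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> callbacks (push)
        self.queues = defaultdict(list)       # topic -> pending messages (pull)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Pub/sub: push the message to every subscriber right away.
        for callback in self.subscribers[topic]:
            callback(message)
        # Also queue it for any client that polls later.
        self.queues[topic].append(message)

    def poll(self, topic):
        # Polling: the client initiates the request and drains the queue.
        pending, self.queues[topic] = self.queues[topic], []
        return pending
```

The subscriber receives the message at publish time without asking; the polling client only sees it whenever it next calls `poll`, which is exactly the latency trade-off described above.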


What are the drawbacks to using Docker when developing, from my experience?


I've seen several drawbacks to using Docker as your development environment. The first is that, depending on your operating system, it can be a hassle to work with rather than a smooth set-up; I'm referring mainly to Windows, though it will work once you've figured it out. In addition, using Docker as your development system takes time: you have to understand how the working parts fit together before you can use it to handle a full-stack build-out of your back end, front end, messaging service, and database. Now for my more technical observations. One issue is understanding how data is stored in a Docker container and what happens when the container stops working and is shut down. I know the data is stored in my volumes; still, when you continually need to rebuild during the development phase of a project, the process could be smoother, and the data isn't completely safe when you need to delete containers or shut Docker down. Finally, from a security perspective, it is hard to monitor all the moving pieces of a large-scale Docker environment effectively without leaving any holes.


After completing our Module 2 test this week, we moved on to setting goals for our Beta partner project. For this, I focused on improving the user experience and making sure all the forms and buttons worked precisely; in other words, there were no errors in the console and the data was being posted successfully. By the end of the week, I had gotten everything I wanted working except one item. Overall, it was a very focused week of understanding how my Django models connect to my views, how a change/update in one microservice's database is communicated to another microservice's database through the poller.py file, and how to reference what I've built on the back end through React and display it on the front end. As long as I work, practice, and train on improving my skills daily, I am improving. I write this because my coding could be better, and I often need to more fully understand how it works under the hood. But attitude, consistency, healthy habits, and mindset will dictate what one creates for oneself in this field and industry. Now, enough with the personal development perspective; on to another stretch goal for this week.
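The poller.py pattern I mention above can be sketched roughly like this. The function names are placeholders of my own, not my actual project code: in a real poller, `fetch` would be an HTTP call to the other microservice's API and `save` would write into the local database through a Django model.

```python
import time


def poll_once(fetch, save, last_seen_id=0):
    """Fetch records newer than last_seen_id and save them locally.

    fetch: callable returning a list of dicts with an "id" key
           (e.g. a GET request to the other microservice's API).
    save:  callable that writes one record into the local database
           (e.g. a Django update-or-create on the local model).
    """
    newest = last_seen_id
    for record in fetch():
        if record["id"] > last_seen_id:
            save(record)
            newest = max(newest, record["id"])
    return newest


def poll_forever(fetch, save, interval_seconds=60):
    """Run the poller on a fixed interval, like a typical poller.py loop."""
    last_seen_id = 0
    while True:
        last_seen_id = poll_once(fetch, save, last_seen_id)
        time.sleep(interval_seconds)
```

Tracking the newest id seen keeps the poller from re-saving records it has already copied, which is the kind of bookkeeping that makes the rebuild cycle during development feel smoother.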


I decided to follow my interest in harnessing the power of the Google Cloud Vision API and have successfully executed the following functions when uploading photos:

Detecting crop hints, detecting faces, detecting handwriting, detecting image properties, detecting logos, detecting multiple objects, detecting text in an image of my own writing, displaying what an image most likely contains along with a probability, returning other links where the photo can be found, label detection, and landmark detection. All of these functions can be viewed in my GitLab repository here, and I plan to take these functions and transition them into a front-end build so that I and other users can interact with them and experience them easily. This experience demonstrated my ability to execute API calls by first reading and understanding Google's API documentation effectively, then taking the code provided and tweaking it into output that I can understand. The Google Cloud Vision API is a powerful machine learning service with the capability of taking images and returning data that can continually expand this technical revolution of what computers are capable of.
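As a rough illustration of what the label-detection responses look like, here is a small helper that filters a Vision REST response by confidence score. The response shape (`labelAnnotations` entries with `description` and `score` fields) follows the documented REST API; the sample values below are made up for illustration, not real API output.

```python
def top_labels(response, min_score=0.8):
    """Return (description, score) pairs from a Vision label-detection
    response dict, keeping only labels at or above min_score."""
    labels = response.get("labelAnnotations", [])
    return [(l["description"], l["score"]) for l in labels if l["score"] >= min_score]


# Sample response in the shape the REST API returns (values are made up).
sample = {
    "labelAnnotations": [
        {"description": "Dog", "score": 0.97},
        {"description": "Mammal", "score": 0.91},
        {"description": "Carnivore", "score": 0.62},
    ]
}
```

Thresholding on the score is how I would decide which of the "most likely" guesses are worth displaying on the front end.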


Going into week 13, I will be traveling with my girlfriend to Brooklyn, New York City, for a conference she is working at. I'm very excited for her to have this opportunity, and I'm looking forward to staying disciplined with the Hack Reactor course while working from a different view and environment in New York City. Next week we will start our larger group projects, and I look forward to integrating FastAPI and a new database, building out an idea from scratch with a team, and making it come to life. Moving into Module 3, I see that the course is seven weeks from finishing, and I'm happy to have more time to dedicate to networking and outreach to companies I would like to provide my services to. On to week 13!


Ready to work with Xander Clemens?

I'd be happy to discuss your project and how we can work together to create unique, fun, and engaging content.

Go ahead and click here to be taken to my business service page and see what I can do for you. Book a call with me now. Looking forward to chatting soon!

Book A Call