Sankar Kumar Kvs


React Query is the best thing that happened to React

Background

React Query, a powerful data-fetching library, has emerged as a game-changer in React development. It simplifies data management, optimizes network requests, and improves caching.

Link to the package — https://www.npmjs.com/package/@tanstack/react-query

Is it worth shifting from Redux to React Query? Let's find out!

How does it make a difference?

  1. Effortless Data Fetching:
  • Without React Query: Manually handling async calls with fetch or axios.
  • With React Query: Use useQuery for auto-fetching, reducing code complexity.
// Without React Query
fetch('/api/data').then(response => response.json()).then(data => ...);

// With React Query
const { data } = useQuery(['data'], () =>
  fetch('/api/data').then(response => response.json())
);

2. Automatic Caching:

  • Without React Query: Implement custom caching mechanisms.
  • With React Query: Caching is built-in and updates automatically.
// Without React Query
const cachedData = localStorage.getItem('cachedData');

// With React Query
const { data } = useQuery(['items'], fetchData); // cached automatically under the ['items'] key

3. Real-time Updates:

  • Without React Query: Complex setup using Web Sockets and manual state management.
  • With React Query: Seamless integration of real-time features.
// Without React Query
socket.on('dataUpdate', updatedData => ...);

// With React Query
const { data } = useQuery(['data'], fetchData, { refetchInterval: 5000 }); // refetch every 5 seconds

4. Global State Management:

  • Without React Query: Combine Context or other state management libraries.
  • With React Query: Use hooks for data and global state.
// Without React Query
const { data, dispatch } = useContext(AppContext);

// With React Query
const { data } = useQuery(['items'], fetchData);
const { mutate } = useMutation(updateItems); // updates go through useMutation

5. Network Request Optimization:

  • Without React Query: Manually manage network requests and debounce.
  • With React Query: Automatic query optimization
// Without React Query
const debouncedFetch = debounce(fetchData, 300);

// With React Query
const { data } = useQuery(['items'], fetchData, {
  refetchOnWindowFocus: false
});

6. Server Pagination & Infinite Loading:

  • Without React Query: Handle pagination manually with complex logic.
  • With React Query: Streamlined infinite loading and pagination.
// Without React Query
const loadMore = () => fetchMoreData(pageNumber);

// With React Query
const { data, fetchNextPage } = useInfiniteQuery(['items'], fetchPageData);

7. Error Handling & Retrying:

  • Without React Query: Implement error handling and retry logic manually.
  • With React Query: Built-in error handling and retrying.
// Without React Query
fetch('/api/data').catch(error => retry(fetchData, 3, error));

// With React Query
const { data } = useQuery(['items'], fetchData, { retry: 3 });

8. Developer-Friendly DevTools:

  • Without React Query: Rely on browser developer tools for insights.
  • With React Query: Debug with dedicated React Query DevTools.
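For example, the devtools ship as a separate package and can sit next to the provider. A minimal sketch, assuming @tanstack/react-query v4 and its devtools package:

import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import { ReactQueryDevtools } from "@tanstack/react-query-devtools";

const queryClient = new QueryClient();

export default function App() {
  return (
    <QueryClientProvider client={queryClient}>
      {/* ...the rest of your app... */}
      <ReactQueryDevtools initialIsOpen={false} />
    </QueryClientProvider>
  );
}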

To see it in action, here is an example of displaying paginated data using React Query.

Example:

Let's assume you have a paginated, filterable data set: clicking a tab changes the filter, a dropdown selects the number of items to display, and there are Previous and Next buttons.

Implementing this using react-query:

const Component = () => {
  const [filterA, setFilterA] = React.useState<boolean>(false);
  const [page, setPage] = React.useState<number>(1);
  const [pageSize, setPageSize] = React.useState<number>(10);

  const GetData = async (filters) => {
    const filterString = filters
      ? "?" + filters.map((filter) => filter.key + "=" + filter.value).join("&")
      : "";
    const url = "/api/data/" + filterString;
    const res = await axios.get(url);
    return res.data;
  };

  const { data, status, error } = useQuery(["data", filterA, page, pageSize], () =>
    GetData([
      { key: "filter_a", value: filterA },
      { key: "page", value: page },
      { key: "page_size", value: pageSize },
    ])
  );

  return (
    <Tabs>
      <TabList>
        <Tab
          onClick={() => {
            setFilterA(false);
            setPage(1);
          }}
        >
          Tab 1
        </Tab>
        <Tab
          onClick={() => {
            setFilterA(true);
            setPage(1);
          }}
        >
          Tab 2
        </Tab>
        <Flex>
          <Button onClick={() => setPage(page - 1)} disabled={page === 1}>
            Previous
          </Button>
          <Text>{page}</Text>
          <Button onClick={() => setPage(page + 1)} disabled={data?.next === null}>
            Next
          </Button>
          <Select
            value={pageSize}
            onChange={(e) => setPageSize(Number(e.target.value))}
          >
            <option value={10}>10</option>
            <option value={20}>20</option>
            <option value={50}>50</option>
          </Select>
        </Flex>
      </TabList>

      <TabPanels>
        <TabPanel>
          {status === "loading" ? (
            <ComponentLoader />
          ) : (
            <DataTable data={data.results} />
          )}
        </TabPanel>
        <TabPanel>
          {status === "loading" ? (
            <ComponentLoader />
          ) : (
            <DataTable data={data.results} />
          )}
        </TabPanel>
      </TabPanels>
    </Tabs>
  );
};

export default Component;

Note:

Like the dependencies array of useEffect, you can treat the array passed to useQuery similarly: whenever any variable in the array changes, the API is triggered again. The same key is also used to cache the data and associate it with the corresponding API call.
useQuery(["data", a, b, c], () => GetData(a,b,c))

So here, whenever a, b, or c changes, the query is triggered again.

Conclusion:

Yes, it's definitely worth shifting from Redux to React Query. You would still need Redux if you want to share complex local state across components.

An example of a use case where you would still need Redux:

When you have multiple components and you are toggling their visibility from button clicks in other components that don't share a common parent.

But you no longer need to store the API layer data using Redux, since everything gets handled by react-query.

A round of applause, dear readers, for reaching the end — if you enjoyed this article as much as a developer loves a well-commented code, give it a clap to brighten my day! 👏

redux-toolkit
react
react-query
react-hook-form
nextjs

Different ways of reading files in AWS Lambda

AWS Lambda is a powerful serverless computing service that allows you to run your code without provisioning or managing servers. When working with Lambda, it’s common to encounter scenarios where you need to read files from various sources, such as Amazon S3 or other storage systems.

In this article, we’ll explore different methods for reading files in AWS Lambda, including reading text files, CSV files, and Parquet files.

The most common ways a Lambda reads files are via an EventBridge trigger, an S3 trigger, or directly from an S3 location.

Example of an EventBridge event for a PutObject call:

{
  "version": "0",
  "id": "12345678-1234-1234-1234-123456789012",
  "detail-type": "AWS API Call via CloudTrail",
  "source": "aws.s3",
  "account": "123456789012",
  "time": "2023-06-15T12:34:56Z",
  "region": "us-west-2",
  "resources": [],
  "detail": {
    "eventVersion": "1.08",
    "eventTime": "2023-06-15T12:34:56Z",
    "eventSource": "s3.amazonaws.com",
    "eventName": "PutObject",
    "awsRegion": "us-west-2",
    "sourceIPAddress": "192.0.2.0",
    "userAgent": "AWS SDK for Python",
    "requestParameters": {
      "bucketName": "my-s3-bucket",
      "key": "path/to/my-file.csv"
    },
    "responseElements": {
      "x-amz-request-id": "ABC123DEF456",
      "x-amz-id-2": "a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0u1v2w3x4y5z"
    },
    "s3": {
      "s3SchemaVersion": "1.0",
      "configurationId": "MyEventConfig",
      "bucket": {
        "name": "my-s3-bucket",
        "ownerIdentity": {
          "principalId": "A1B2C3D4E5F6G7H8I9J0"
        },
        "arn": "arn:aws:s3:::my-s3-bucket"
      },
      "object": {
        "key": "path/to/my-file.csv",
        "size": 1024,
        "eTag": "1234567890abcdef",
        "versionId": "1a2b3c4d5e6f7g8h9i0j1k2l3m4n5o6p7q8r9s0t1u2v3w4x5y6z",
        "sequencer": "A1B2C3D4E5F6G7H8I9J0K1L2M3N4O5P6Q7R8S9T0U1V2W3X4Y5Z"
      }
    },
    "eventSource": "s3.amazonaws.com"
  }
}

Reading text files:

On S3 Trigger:

import boto3

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # Get the bucket and object key from the S3 event
    bucket = event['Records'][0]['s3']['bucket']['name']
    object_key = event['Records'][0]['s3']['object']['key']

    # Retrieve the file from S3
    response = s3_client.get_object(Bucket=bucket, Key=object_key)

    # Read the file contents as text
    file_contents = response['Body'].read().decode('utf-8')

    # Process the file contents as needed
    # ...

On Event Bridge:

import boto3

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # Extract the S3 bucket and object key from the event
    bucket = event['detail']['requestParameters']['bucketName']
    object_key = event['detail']['requestParameters']['key']

    # Retrieve the file from S3
    response = s3_client.get_object(Bucket=bucket, Key=object_key)

    # Read the file contents as text
    file_contents = response['Body'].read().decode('utf-8')

    # Process the file contents as needed
    # ...

Reading CSV files:

import boto3
import csv

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # Extract the S3 bucket and object key from the event
    bucket = event['detail']['requestParameters']['bucketName']
    object_key = event['detail']['requestParameters']['key']

    # Retrieve the file from S3
    response = s3_client.get_object(Bucket=bucket, Key=object_key)

    # Read the file contents as text
    file_contents = response['Body'].read().decode('utf-8')

    # Parse the CSV data
    csv_data = csv.reader(file_contents.splitlines())

    # Iterate over each row in the CSV data
    for row in csv_data:
        # Access the values in each column of the row
        column1 = row[0]
        column2 = row[1]
        # ... process the data as needed

Reading Parquet files

import io

import boto3
import pyarrow.parquet as pq

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # Get the bucket and object key from the S3 event
    bucket = event['Records'][0]['s3']['bucket']['name']
    object_key = event['Records'][0]['s3']['object']['key']

    # Retrieve the Parquet file from S3
    response = s3_client.get_object(Bucket=bucket, Key=object_key)

    # Wrap the raw bytes in a file-like object so pyarrow can read them
    parquet_file = io.BytesIO(response['Body'].read())
    parquet_table = pq.read_table(parquet_file)

    # Access the Parquet table data
    # Example: Get a specific column
    column_data = parquet_table.column('column_name')

    # Process the Parquet data as needed
    # ...

I hope this article provides you with a comprehensive understanding of different file reading methods in AWS Lambda and helps you leverage the full potential of serverless computing for your file processing needs.

Happy coding!

aws-lambda
file-reading
csv
parquet
aws

Setup Apache Spark — Java in 5 minutes

Setup Apache Spark — Java in 5 minutes

Introduction

Apache Spark is an open-source data processing framework for big data applications. It provides a simple and fast way to process large amounts of data in a distributed environment. In this article, I will show you how to set up Apache Spark in 5 minutes using Docker and IntelliJ.

Prerequisites:

  1. Docker installed
  2. IntelliJ installed

Configuration:

We will be utilizing the Spark 3.3.0 framework, optimized for Hadoop 3.3, with the use of OpenJDK 8 as our JDK.

1. Create a new project in Intellij:

  1. Name the project: Choose a meaningful and descriptive name for your project that accurately reflects its purpose.
  2. Select Java and Maven: When creating a new project in IntelliJ, select “Java” as the programming language and “Maven” as the build tool. Maven is a popular build tool for Java projects that provides a standard structure for your project, automates the build process, and manages dependencies.
  3. Download JDK 1.8 from IntelliJ itself: IntelliJ provides the option to download and install the Java Development Kit (JDK) required for your project. In this case, select JDK 1.8, which is a commonly used version of Java.
  4. Create the project: After specifying the project name, programming language, build tool, and JDK, click on the “Create” button to create the project. IntelliJ will generate the required files and directories for a basic Maven project, allowing you to get started with development right away.

Additional steps you may consider:

  1. Configure Maven: Before you start coding, make sure to configure Maven by specifying the project details in the “pom.xml” file, including the project name, version, and dependencies.
  2. Set up the development environment: Ensure that your development environment is set up correctly by verifying that the JDK and IntelliJ are installed and configured properly.

Once the project is created,

1. Add the below dependency in pom.xml

<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.13</artifactId>
<version>3.3.0</version>
</dependency>

2. Create this docker compose file in your root folder: (reference)

NOTE: Create a /tmp/spark-events-local directory in your root folder

version: '3'
services:
  spark-master:
    image: bde2020/spark-master:3.3.0-hadoop3.3
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
  spark-worker-1:
    image: bde2020/spark-worker:3.3.0-hadoop3.3
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
  spark-worker-2:
    image: bde2020/spark-worker:3.3.0-hadoop3.3
    container_name: spark-worker-2
    depends_on:
      - spark-master
    ports:
      - "8082:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
  spark-history-server:
    image: bde2020/spark-history-server:3.3.0-hadoop3.3
    container_name: spark-history-server
    depends_on:
      - spark-master
    ports:
      - "18081:18081"
    volumes:
      - /tmp/spark-events-local:/tmp/spark-events

3. To test your setup, paste the following code into your Main.java file; it counts the number of words in a text file:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class Main {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("SparkTest").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> rdd = sc.textFile("src/main/resources/test.txt");
        long count = rdd.flatMap(line -> Arrays.asList(line.split(" ")).iterator())
                .map(word -> word.replaceAll("[^a-zA-Z]", "").toLowerCase())
                .filter(word -> !word.isEmpty())
                .count();
        System.out.println(count);
    }
}

4. Run the docker compose file and then run the Main.java file
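For example, from the project root (the commands below assume the classic docker-compose CLI; newer Docker versions use "docker compose"):

# start the Spark master, workers and history server in the background
docker-compose up -d

# stop and remove the containers when you are done
docker-compose down

Main.java can then be run directly with IntelliJ's Run button.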

Congratulations! You have successfully set up Apache Spark and tested it with a Java application.

java
apache-spark
docker-compose
maven
docker

Deploying MERN App in Vercel

There are a few things we need to configure to deploy a MERN application on Vercel:

  1. While creating the Vercel project, choose the framework preset as Other

2. Now declare static folders in Express and endpoints to serve these static files, and also initialize CORS

Example template for index.js:

require("dotenv").config();
const path = require("path");
const express = require("express");
const app = express();
const mongoose = require("mongoose");
var cors = require("cors");
const { default: axios } = require("axios");
app.use(express.urlencoded({ limit: "50mb", extended: true }));
app.use(express.json({ limit: "50mb" }));
app.use(cors());
app.use((req, res, next) => {
res.header("Access-Control-Allow-Origin", "*");
res.header(
"Access-Control-Allow-Headers",
"Origin, X-Requested-With, Content-Type, Accept, Authorization,auth-token"
);
if (req.method === "OPTIONS") {
res.header("Access-Control-Allow-Methods", "PUT, POST, PATCH, DELETE, GET");
return res.status(200).json({});
}
next();
});
const connect = mongoose
.connect(process.env.MONGO_URI, {
useUnifiedTopology: true,
useNewUrlParser: true,
})
.then(() => console.log("Mondo db connected...."))
.catch((err) => console.log(err));
app.use("/api/org", require("./routes/org"));
app.use(express.static(path.join(__dirname, "./build")));
app.get("*", (req, res) => {
res.sendFile(path.resolve(__dirname, "./build/index.html"), function (err) {
if (err) {
res.status(500).send(err + "asasas not wokring");
}
});
});
const port = process.env.PORT || 5000;
app.listen(port, () => console.log(`Listening on port ${port}`));
module.exports = app;

3. Remove build/dist or any other static folders from .gitignore and push it to your GitHub repository.

4. Finally, add vercel.json in root directory

{
  "builds": [
    {
      "src": "./index.js",
      "use": "@vercel/node"
    }
  ],
  "routes": [{ "src": "/(.*)", "dest": "/" }]
}

5. Everything else stays the same.

6. Now push the whole thing to your repo and deploy it on Vercel.

Don't forget to add your environment variables in the Vercel dashboard.

If you encounter any issues, feel free to comment.

mern
vercel
deploy

Create your first npm package in just 5 minutes

Do it right the first time!

A medium-api-npm package to read articles from Medium and display them on your website, and also to post articles from your blog website to Medium, something most bloggers dream of.

Create your npm account

  1. Visit here and create your npm account
  2. Once you are done, go to your command prompt and type
npm adduser

3. Type your username and password and you are ready to start

What are we going to build ?

The script we are going to write always depends on what we actually are going to build.

For starters, let's build a small yet useful npm package. People often want to display their Medium articles on their blog website and also post articles from their blog to Medium. Medium has made this possible through the Medium API, but that requires working with a REST API, which not everyone knows how to do. So let's create an npm package that makes it simple.

Let’s get started !!!

  1. Create an empty git repository
  2. Clone it and open it in your favorite code editor

Initialize npm

npm init
Add keywords so that they improve the package's discoverability (see the sketch below)
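For instance, the relevant fields of package.json might look like this; the name, description, and keyword values are placeholders for your own:

{
  "name": "medium-api-npm",
  "version": "1.0.0",
  "description": "Read Medium articles and publish posts via the Medium API",
  "main": "index.js",
  "keywords": ["medium", "medium-api", "blog", "articles"]
}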

Create index.js, the entry point of our package

This is the place where we write the code available for the user

Install the dependencies needed

In this package we only need axios. Since index.js imports it at runtime, install it as a regular dependency:

npm i axios

Let’s start to code

We need to get the Medium articles of the user; to do that we need the integration token from Medium.

Medium→settings→integration token

First, we write the config object and get the user ID, which is needed to retrieve data from the user's Medium account.

const axios = require("axios");

async function getMediumArticles(options) {
  const config = {
    headers: {
      "Host": "api.medium.com",
      "Content-type": "application/json",
      "Authorization": `Bearer ${options.auth}`,
      "Accept": "application/json",
      "Accept-Charset": "utf-8",
    },
  };
  let userData;
  let userPublications;
  // Get the authenticated user's details (we need the user id)
  await axios.get("https://api.medium.com/v1/me", config).then((res) => {
    userData = res.data.data;
  });
  // Get the user's publications
  await axios
    .get(`https://api.medium.com/v1/users/${userData.id}/publications`, config)
    .then((res) => {
      userPublications = res.data.data;
    });
  return { userData, userPublications };
}

Posting an article to medium

To post from your blog website to Medium, we also need that auth token.

async function addPost(options) {
  const config = {
    headers: {
      "Host": "api.medium.com",
      "Content-type": "application/json",
      "Authorization": `Bearer ${options.auth}`,
      "Accept": "application/json",
      "Accept-Charset": "utf-8",
    },
  };
  let userData;
  await axios.get("https://api.medium.com/v1/me", config).then((res) => {
    userData = res.data.data;
  });
  // Create the post under the authenticated user
  await axios
    .post(
      `https://api.medium.com/v1/users/${userData.id}/posts`,
      {
        title: options.title,
        contentFormat: "html",
        content: options.html,
        canonicalUrl: options.canonicalUrl,
        tags: [options.tags],
        publishStatus: options.publishStatus,
      },
      config
    )
    .then((res) => console.log(res.data))
    .catch((err) => console.log(err));
}

Final step

We need to export the functions we have made so that users can access them

module.exports = { getMediumArticles, addPost };

Publishing your first npm

Type this in your command prompt and your package will be available for npm users

npm publish

Check out the medium npm package here

medium
npm-package
npm

React Server-Side Rendering

Make your React app more awesome. 😎😎😎

Why is this needed?

JavaScript frameworks like React bring a first-class user experience and are also very developer-friendly. Beginners often build a complete website using starter kits like create-react-app and don't realize the major issue until they go live.

If you have built a big website with many components and go to view page source, you don’t see any of them in it 🤔🤔 Why??

You can only see an empty div#root, which is obvious because react works on virtual DOM. The content of your website is actually rendered on the client. This is a huge issue for search engine optimization. 😰

💡 This can be solved by rendering the DOM on the server-side and sending it as a string to the client.

Many frameworks have implemented this for you, and one can use them directly, like Gatsby and Nextjs. However, in this article, I will be explaining how to implement SSR from scratch.

Assuming that you’re using many routes, redux, different styled-components, etc., let’s code accordingly to work for all kinds of react apps.

Create a server using express

Create a server.js file in the root directory of your app

Initialize an express app

const express = require("express");
const path = require("path");
const port = process.env.PORT || 8080;
const app = express();
app.use(express.static(path.resolve(__dirname, ".", "build")));
app.get("*", (req, res, next) => {
res.send("hello world");
});
app.listen(port,()=>{
console.log(`App started in port ${port}`);
});

Let’s get back to this after a while…

Changes to be done in created react app

Some changes need to be done in App.js and index.js

  1. Many people initialize their Redux store in index.js, which is correct. But this creates a problem when we render App.js on the server-side: the components will not be able to access the Redux store unless the store is also provided inside App.js, so that it is part of what gets rendered on the server.

src/App.js

https://medium.com/media/cf2305dc784ea5b6ad85ff8617d2d494/href

src/index.js

https://medium.com/media/27752440a09c0a3a00722a950ce50bda/href

Here we wrap the App component with BrowserRouter on the client-side, not in App.js, because we need to import App in server.js, where it will be wrapped with StaticRouter instead.

And instead of ReactDOM.render, we use ReactDOM.hydrate, because the React documentation says:

hydrate() is the same as render(), but is used to hydrate a container whose HTML contents were rendered by ReactDOMServer. React will attempt to attach event listeners to the existing markup. React expects that the rendered content is identical between the server and the client.

Basically, it means that it hydrates the dried string sent from the server by making event listeners and all back alive.
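As a reference, a minimal client entry could look like the sketch below (it assumes react-router-dom v5, ReactDOM.hydrate from React 17, and App exported from ./App):

// src/index.js (sketch)
import React from "react";
import ReactDOM from "react-dom";
import { BrowserRouter } from "react-router-dom";
import App from "./App";

// hydrate attaches event listeners to the markup already rendered on the server
ReactDOM.hydrate(
  <BrowserRouter>
    <App />
  </BrowserRouter>,
  document.getElementById("root")
);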

2. App needs to be wrapped with two wrappers: BrowserRouter from 'react-router-dom' and StaticRouter from 'react-router'. This enables the routes to work properly on both the client and the server. The client app is wrapped in BrowserRouter, whereas the server-side render is wrapped with StaticRouter.

This doesn't mean that we will have two App.js files. Everything will become clear in a moment.

server.js

https://medium.com/media/be76425a75c8ded49aa6caef8a91c65b/href

After npm run build, we can find index.html in the build folder with <div id="root"></div>. We read it using the fs module, fill the empty div with <App/> rendered as a string, and send the result. We normally wrap the App component with BrowserRouter, but on the server we wrap it with StaticRouter, its stateless analogue. It accepts two props: location, which we get from req.url in the Express route, and context, to which we pass an empty object. The context is normally useful for storing information regarding those routes and can be made available through the staticContext prop. A rough sketch of this render route is shown below.
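Under these assumptions (Express, react-router v5, and a CRA build folder), the route could look roughly like this; the paths and the App import are placeholders for your own setup:

// server.js render route (sketch)
import express from "express";
import fs from "fs";
import path from "path";
import React from "react";
import { renderToString } from "react-dom/server";
import { StaticRouter } from "react-router";
import App from "./src/App";

const app = express();
app.use(express.static(path.resolve(__dirname, "build"), { index: false }));

app.get("*", (req, res) => {
  const context = {};
  // Render the app to a plain HTML string for the requested URL
  const markup = renderToString(
    <StaticRouter location={req.url} context={context}>
      <App />
    </StaticRouter>
  );
  const indexFile = path.resolve(__dirname, "build", "index.html");
  fs.readFile(indexFile, "utf8", (err, html) => {
    if (err) return res.status(500).send("Error reading index.html");
    // Inject the rendered markup into the empty root div
    res.send(html.replace('<div id="root"></div>', `<div id="root">${markup}</div>`));
  });
});

app.listen(process.env.PORT || 8080);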

src/Notfound.js

When a route is not found, render Notfound.js or any other fallback component. Add the following so that the status (and any redirect information) is available on the server-side through the static context:

import React from 'react';

export default ({ staticContext = {} }) => {
  staticContext.status = 404;
  return <h1>Oops, nothing here!</h1>;
};

Add this in the server route

if (context.url) {
  return res.redirect(301, context.url);
}

Now, if you run node server.js, you will get many errors 🤧

Since the app uses JSX and modern ES syntax, Node cannot run it directly. Hence, we need Babel and several presets to transpile it back to plain JavaScript. To do this, we first need to add a webpack config. It needs many dev dependencies, such as style-loader for handling CSS files, preset-react, stage-0 for asynchronous functions, and many more, depending on what you have used. I am providing a config containing almost everything.

I will explain it in detail in my next article on building a webpack config for a React app.

webpack.config.js

https://medium.com/media/e6690392955177a7828180de540bfdc6/href

package.json (partial)

https://medium.com/media/a25f86e81e2f915c7fd9637f87c0a132/href

Now we need to register Babel with its presets and then require server.js. So create index.js in the root directory.

index.js

https://medium.com/media/27752440a09c0a3a00722a950ce50bda/href
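One common shape for this file is sketched below, assuming Babel 6 era packages (babel-register, babel-polyfill and the env/react/stage-0 presets) are installed as dev dependencies:

// index.js (sketch)
require("babel-register")({
  presets: ["env", "react", "stage-0"],
});
require("babel-polyfill");

// server.js can now use JSX and modern syntax
require("./server");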

Now add a new script in package.json

"ssr": "node index.js"

That’s it…

npm run build
npm run ssr

Now you can run these commands and view the page source; you can find the whole code rendered. You can also view the network tab to find the first request done by the local host to render the app.

If your website contains many routes and components, then definitely you will find the difference and feel how smooth and fast your app becomes 🤩🤩🤩

reactjs
pwa
server-side-rendering
seo

Dockerizing React App

Make your app more awesome. 😎😎😎

Why docker?

Docker is an open-source containerization platform. Docker enables developers to package applications into containers — standardized executable components that combine application source code with all the operating system (OS) libraries and dependencies required to run the code in any environment.

To make your react app ready for hosting, we must dockerize it.

https://medium.com/media/e4da399b86da506e43e111c2571695c8/href

9. Expose port 80

10. To run nginx within the container without halting, we should use the daemon off configuration directive described in the official docs.

https://medium.com/media/224c9f67f1ba070b9c18863a8ca08531/href
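Putting the steps together, a typical Dockerfile for a create-react-app project served by nginx looks roughly like the sketch below (the node image tag is illustrative):

# build stage: install dependencies and produce the static build
FROM node:16-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# serve stage: copy the build output into nginx
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]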

Finally, create a Docker image from the Dockerfile and push it using the following commands:

docker image build -t <username>/<image-name> .
docker push <username>/<image-name>

To test it before pushing, use:

docker container run -d -p 8080:80 <username>/<image-name>

I highly recommend using the ESLint VS Code extension while developing, because even a simple error will stop your page from showing up, and it will be very difficult to tell whether the problem is with dockerizing or with the application code.

One can also use docker-compose to speed up testing and building the image in a better way.

https://medium.com/media/12c6c74ce49b63141c1c8dc84fba8e1c/href

Create a docker-compose.yml file and run the following commands:

docker-compose up
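A minimal compose file for this setup could look like the sketch below (service and image names are placeholders):

version: "3"
services:
  react-app:
    build: .
    image: <username>/<image-name>
    ports:
      - "8080:80"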

If you have come this far, you definitely liked the article. Please do consider applauding the article and following me for more such articles.

docker
reactjs

Enable Web Push Notifications for your React app

Using Firebase cloud messaging

First, convert your React app to a PWA

I have explained this in detail in my previous article here

Steps involved :

  1. Create a Firebase project
  2. Enable cloud messaging
  3. Initialize Firebase in our React app
  4. Enable cloud messaging

The way Firebase Cloud Messaging works is that it creates a token for every user, which needs to be stored in a database (Mongo or Firebase itself). Using an API provided by Firebase, we then send notifications as JSON data to all users or to selected users.
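For reference, sending a message from a server to one stored token can be sketched with the (now legacy) FCM HTTP API; SERVER_KEY and fcmToken below are placeholders for your cloud messaging server key and a token read from your database:

const axios = require("axios");

const SERVER_KEY = process.env.FCM_SERVER_KEY; // from the Firebase console (placeholder)
const fcmToken = "..."; // token previously collected from the user and stored in your database

axios.post(
  "https://fcm.googleapis.com/fcm/send",
  {
    to: fcmToken,
    data: {
      // stringified so the onMessage / background handlers shown later can JSON.parse it
      notification: JSON.stringify({
        title: "Hello",
        body: "A test push notification",
        icon: "/logo192.png",
      }),
    },
  },
  {
    headers: {
      Authorization: `key=${SERVER_KEY}`,
      "Content-Type": "application/json",
    },
  }
);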

Create Firebase project

Click here to open Firebase console and login with your credentials

  1. Create a new project and select web app
  2. Go to settings→project settings→general
  3. Get the Firebase SDK snippet

Now initialize the Firebase app using sdk

Go to index.html and paste the CDN scripts shown in the code snippet below to initialize Firebase

<script src="https://www.gstatic.com/firebasejs/7.15.1/firebase-app.js"></script>
<script src="https://www.gstatic.com/firebasejs/7.15.1/firebase-messaging.js"></script>
var firebaseConfig = {
apiKey: ".............",
authDomain: "..............",
databaseURL: "............",
projectId: "..........",
storageBucket: "...........",
messagingSenderId: "............",
appId: "....................",
measurementId: ".........",
};
firebase.initializeApp(firebaseConfig);

First we create a messaging object and get a token for each user. For this we need a public VAPID key, which can be found in the Firebase console under

settings→cloud messaging→web configuration

Create one if there is no key pair, or else copy it and use it as the public VAPID key in the code shown below

function showToken(a) {
  console.log(a);
}

const messaging = firebase.messaging();

messaging.usePublicVapidKey("paste the key pair here");

messaging.requestPermission().then(() => {
  console.log("granted");
});

messaging
  .getToken()
  .then((currentToken) => {
    if (currentToken) {
      console.log(currentToken);
    } else {
      // Show permission request.
      console.log("No Instance ID token available");
      // Show permission UI.
      updateUIForPushPermissionRequired();
      setTokenSentToServer(false);
    }
  })
  .catch((err) => {
    console.log("An error occurred while retrieving token. ", err);
    showToken("Error retrieving Instance ID token. ", err);
    // setTokenSentToServer(false);
  });

In the showToken function, store the token in the database of your choice instead of just logging it

Create a new Notification object in the onMessage handler

messaging.onMessage((payload) => {
  var obj = JSON.parse(payload.data.notification);
  var notification = new Notification(obj.title, {
    icon: obj.icon,
    body: obj.body,
  });
});

Now go to serviceWorker.js and display the notification

  1. First, import the scripts in the sw.js file too and initialize the Firebase app so that you can access Firebase in the service worker
  2. One additional thing you need to add to your Firebase SDK config is messagingSenderId, which you can find under

settings→cloud messaging→project credentials→sender id

importScripts("https://www.gstatic.com/firebasejs/7.15.1/firebase-app.js");
importScripts(
"https://www.gstatic.com/firebasejs/7.15.1/firebase-messaging.js"
);
firebase.initializeApp({
messagingSenderId: ".........",
apiKey: ".............",
authDomain: "..............",
databaseURL: "............",
projectId: "..........",
storageBucket: "...........",
messagingSenderId: "............",
appId: "....................",
measurementId: ".........",
});
const messaging = firebase.messaging();
messaging.setBackgroundMessageHandler(function (payload) {
console.log(
"[firebase-messaging-sw.js] Received background message ",
payload
);
var obj = JSON.parse(payload.data.notification);
var ntitle = obj.title;
var noptions = {
body: obj.body,
icon: obj.icon,
};
return self.registration.showNotification(ntitle, noptions);
});

Yay!! Everything is set now. You just need to go to the Firebase

console→cloud messaging→send new message

You need to get the fcm token from the database where you stored the tokens using the function showToken

ON THE WHOLE

index.html

<script src="https://www.gstatic.com/firebasejs/7.15.1/firebase-app.js"></script>
<script src="https://www.gstatic.com/firebasejs/7.15.1/firebase-messaging.js"></script>
<script>
function showToken(a) {
console.log(a);
}
var firebaseConfig = {
apiKey: ".............",
authDomain: "..............",
databaseURL: "............",
projectId: "..........",
storageBucket: "...........",
messagingSenderId: "............",
appId: "....................",
measurementId: ".........",
};
// Initialize Firebase
firebase.initializeApp(firebaseConfig);
const messaging = firebase.messaging();
messaging.usePublicVapidKey(
".............."
);
messaging.requestPermission().then(() => {
console.log("granted");
});
messaging
.getToken()
.then((currentToken) => {
if (currentToken) {
console.log(currentToken);
} else {
// Show permission request.
console.log(
"No Instance ID token available. Request permission to generate one."
);
// Show permission UI.
updateUIForPushPermissionRequired();
setTokenSentToServer(false);
}
})
.catch((err) => {
console.log("An error occurred while retrieving token. ", err);
showToken("Error retrieving Instance ID token. ", err);
//setTokenSentToServer(false);
});
messaging.onMessage((payload) => {
var obj = JSON.parse(payload.data.notification);
var notification = new Notification(obj.title, {
icon: obj.icon,
body: obj.body,
});
});
</script>
<script>
// Your web app's Firebase configuration
if ("serviceWorker" in navigator) {
window.addEventListener("load", () => {
navigator.serviceWorker
.register("./firebase-messaging-sw.js")
.then((reg) => console.log("Success: ", reg.scope))
.catch((err) => console.log(err));
});
}
</script>

firebase-messaging-sw.js

const CACHE_NAME = "version-1";
const urlsToCache = ["index.html", "offline.html"];
const self = this;
//install a service worker
self.addEventListener("install", (event) => {
event.waitUntil(
caches.open(CACHE_NAME).then((cache) => {
console.log("openend cache");
return cache.addAll(urlsToCache);
})
);
});
//listen for request
self.addEventListener("fetch", (event) => {
event.respondWith(
caches.match(event.request).then(() => {
return fetch(event.request).catch(() => caches.match("offline.html"));
})
);
});
//activate the service worker
self.addEventListener("activate", (event) => {
const cacheWhitelist = [];
cacheWhitelist.push(CACHE_NAME);
event.waitUntil(
caches.keys().then((cacheNames) => {
return Promise.all(
cacheNames.map((cacheName) => {
if (!cacheWhitelist.includes(cacheName))
return caches.delete(cacheName);
})
);
})
);
});
importScripts("https://www.gstatic.com/firebasejs/7.15.1/firebase-app.js");
importScripts(
"https://www.gstatic.com/firebasejs/7.15.1/firebase-messaging.js"
);
firebase.initializeApp({
messagingSenderId: "...........",
apiKey: ".............",
authDomain: "..............",
databaseURL: "............",
projectId: "..........",
storageBucket: "...........",
messagingSenderId: "............",
appId: "....................",
measurementId: ".........",
});
const messaging = firebase.messaging();
messaging.setBackgroundMessageHandler(function (payload) {
console.log(
"[firebase-messaging-sw.js] Received background message ",
payload
);
var obj = JSON.parse(payload.data.notification);
var ntitle = obj.title;
var noptions = {
body: obj.body,
icon: obj.icon,
};
return self.registration.showNotification(ntitle, noptions);
});

In my next article I will brief you how to send push notifications using fcm(Firebase cloud messaging) API in bulk rather than just using the GUI

Stay tuned 😉

reactjs
web-push-notifications
pwa
firebasecloudmessaging
firebase

Converting your React app to PWA in just 5 minutes

What is a Progressive web app?

Web applications can reach anyone, anywhere, on any device with a single codebase. Native applications are known for being incredibly rich and reliable. A PWA is the best of both worlds. Progressive Web Apps (PWAs) are built and enhanced with modern APIs to deliver native-like capabilities, reliability, and installability while reaching anyone, anywhere, on any device with a single codebase.

Why PWA ?

Converting a web app to a PWA gives it the power of a native app experience. Now what do we actually mean by a native app experience?

  1. Offline work mode
  2. Enables Push notifications
  3. Full responsiveness and browser compatibility
  4. Self-updates
  5. Connectivity independence

How to convert a web app to pwa ?

It's quite simple. First we need to create manifest.json, which is like a configuration file for your web app.

manifest.json

{
  "short_name": "...",
  "name": ".....",
  "icons": [
    {
      "src": "logo64.png",
      "sizes": "64x64 32x32 24x24 16x16",
      "type": "image/x-icon"
    },
    {
      "src": "logo192.png",
      "type": "image/png",
      "sizes": "192x192"
    },
    {
      "src": "logo512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ],
  "start_url": ".",
  "display": "standalone",
  "theme_color": "#17202a",
  "background_color": "#17202a"
}

Here theme_color and background_color are the colors that appear as a splash when the web app opens. The standalone display mode gives a native app look; it opens in full screen like an app. These are some of the most important properties of the manifest file.

The most important part of a pwa is a serviceWorker

Now what is a serviceWorker !!

A service worker is a background worker that acts as a programmable proxy, allowing us to control what happens on a request-by-request basis.

This basically means that a service worker is a script (a JavaScript file) that runs in the background and assists in offline-first web application development.

Basic structure of pwa

The service worker is the intermediary between the network and the application. A service worker needs to be registered for a web application to function offline and to enable push notifications. It caches data so that the whole page does not have to be reloaded on every visit; this is also how offline mode is enabled.

Registering a serviceWorker

Add this as a script tag in index.html and create a file serviceWorker.js

<script>
  if ("serviceWorker" in navigator) {
    window.addEventListener("load", () => {
      navigator.serviceWorker
        .register("./serviceWorker.js")
        .then((reg) => console.log("Success: ", reg.scope))
        .catch((err) => console.log(err));
    });
  }
</script>

Here we check whether the browser supports service workers, and if so we register the serviceWorker file on load. We can follow the same method for any type of application: React, Angular, plain HTML, etc.

serviceWorker.js

We create a cache name and an array containing the files to be added to the cache. We do this so that the logos and other assets are not reloaded every time someone visits the page. Service worker code is full of promises and asynchronous calls, which is why it can look overwhelming at first.

Installing a Serviceworker

const CACHE_NAME = "version-1";
const urlsToCache = ["index.html", "offline.html"];
const self = this;
self.addEventListener("install", (event) => {
event.waitUntil(
caches.open(CACHE_NAME).then((cache) => {
console.log("openend cache");
return cache.addAll(urlsToCache);
})
);
});

Listening for requests

The service worker does not have any direct connection to the contents of the web app; it acts more like a proxy. So we need to listen for requests coming from the browser.

self.addEventListener("fetch", (event) => {
event.respondWith(
caches.match(event.request).then(() => {
return fetch(event.request).catch(() => caches.match("offline.html"));
})
);
});

Activate a service worker

Whenever the cache version changes, a new cache is created and the previous cache becomes useless. Hence, we need to delete the previous cache and keep only the updated one. This can be done by adding the current cache name to a whitelist array and, on activation, deleting every cache that is not in the whitelist.

self.addEventListener("activate", (event) => {
const cacheWhitelist = [];
cacheWhitelist.push(CACHE_NAME);
event.waitUntil(
caches.keys().then((cacheNames) => {
return Promise.all(
cacheNames.map((cacheName) => {
if (!cacheWhitelist.includes(cacheName))
return caches.delete(cacheName);
})
);
})
);
});
It can be done in the same way for react or any other framework. Create-react-app includes service worker.js in src file. We can render it as script through index.js or else do it in the same way by creating serviceWorker.js in public folder.

ON THE WHOLE CONVERTING REACT TO PWA

index.html

//main html file

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8" />
<link rel="icon" href="%PUBLIC_URL%/logo64.png" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta name="theme-color" content="#000000" />
<meta
name="description"
content="Web site created using create-react-app"
/>
<link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />
<link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
<title>Document</title>
</head>
<body>
<noscript>You need to enable JavaScript to run this app.</noscript>
<div id="root"></div>

<script>
if ("serviceWorker" in navigator) {
window.addEventListener("load", () => {
navigator.serviceWorker
.register(".serviceWorker.js.js")
.then((reg) => console.log("Success: ", reg.scope))
.catch((err) => console.log(err));
});
}
</script>
</body>
</html>

serviceWorker.js (in public folder itself)

const CACHE_NAME = "version-1";
const urlsToCache = ["index.html", "offline.html"];
const self = this;
//install a service worker
self.addEventListener("install", (event) => {
event.waitUntil(
caches.open(CACHE_NAME).then((cache) => {
console.log("openend cache");
return cache.addAll(urlsToCache);
})
);
});
//listen for request
self.addEventListener("fetch", (event) => {
event.respondWith(
caches.match(event.request).then(() => {
return fetch(event.request).catch(() => caches.match("offline.html"));
})
);
});
//activate the service worker
self.addEventListener("activate", (event) => {
const cacheWhitelist = [];
cacheWhitelist.push(CACHE_NAME);
event.waitUntil(
caches.keys().then((cacheNames) => {
return Promise.all(
cacheNames.map((cacheName) => {
if (!cacheWhitelist.includes(cacheName))
return caches.delete(cacheName);
})
);
})
);
});

Thanks for your valuable time and hope you succeed in creating a progressive react app :)

pwa
service-worker
reactjs
manifest
progressive-web-app

Rich Text Editor In Nextjs

Set up a rich text editor for a Next.js app in 3 steps

Directly using the text editor npm packages the way you would in a plain React app will not work.

Reset Tailwind css

By default, Next.js pre-renders every page. This means that Next.js generates HTML for each page in advance, instead of having it all done by client-side JavaScript. Pre-rendering can result in better performance and SEO.

Challenges

  1. During this pre-rendering, the window object isn't available. That's the reason it shows the error.
  2. The global styles and Tailwind styles get applied to the imported text editor and disrupt the editor's whole CSS.

Before we proceed to the solution: here I will be using CKEditor as the rich text editor.

Solution

Resolving the window issue using useRef

https://medium.com/media/1dd7bdb057eb3abf937420c594d91ebf/href
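A common shape for this, assuming the @ckeditor/ckeditor5-react and @ckeditor/ckeditor5-build-classic packages, is to require the editor inside useEffect so it only loads in the browser. The component below is a sketch rather than the exact embedded code:

import React, { useEffect, useRef, useState } from "react";

const Editor = ({ text, setText }) => {
  const editorRef = useRef();
  const [editorLoaded, setEditorLoaded] = useState(false);
  const { CKEditor, ClassicEditor } = editorRef.current || {};

  useEffect(() => {
    // require() runs only after mount, i.e. in the browser, where `window` exists
    editorRef.current = {
      CKEditor: require("@ckeditor/ckeditor5-react").CKEditor,
      ClassicEditor: require("@ckeditor/ckeditor5-build-classic"),
    };
    setEditorLoaded(true);
  }, []);

  if (!editorLoaded) return <div>Editor loading...</div>;

  return (
    <CKEditor
      editor={ClassicEditor}
      data={text}
      onChange={(event, editor) => setText(editor.getData())}
    />
  );
};

export default Editor;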

Resolving the style collision issue

  1. First, create an unset.css file
https://medium.com/media/83dfe1daa582c0062cd28ca88dd817e2/href
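A minimal sketch of what unset.css can contain; it simply reverts inherited global/Tailwind rules inside any element carrying the unset class:

/* unset.css (sketch) */
.unset * {
  all: revert;
}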

2. Import unset.css into the global.css file of Tailwind where you import the Tailwind layers

global.css

@tailwind base;
@tailwind components;
@tailwind utilities;
@import "./unset.css";

3. Add the below class names on the div wrapping CKEditor

<div className="unset text-black mb-5">
<CKEditor
editor={ClassicEditor}
data={text}
onInit={(editor) => {
console.log("Editor is ready to use!", editor);
}}
onChange={(event, editor) => {
const data = editor.getData();
setText(data);
}}
/>
</div>

If you have any issues, let me know in the comments, and do like the article if it helped.

Thank you 😀 and have a nice day!

resettailwindcss
tailwind-css
rich-text-editor
nextjs
useref