React Query is a powerful data-fetching library that has emerged as a game-changer in React development. It simplifies data management, optimizes network requests, and enhances caching strategies.
Link to the package — https://www.npmjs.com/package/@tanstack/react-query
Is it worth shifting from Redux to React Query? Let's find out!
1. Simplified Data Fetching:
// Without React Query
fetch('/api/data').then(response => response.json()).then(data => ...);
// With React Query
const { data } = useQuery('data', () => fetch('/api/data')
.then(response => response.json())
);
2. Automatic Caching:
// Without React Query
const cachedData = localStorage.getItem('cachedData');
// With React Query
const { data } = useQuery(['items'], fetchData); // results are cached automatically under the query key
3. Real-time Updates:
// Without React Query
socket.on('dataUpdate', updatedData => ...);
// With React Query
const { data } = useQuery('data', fetchData, { refetchInterval: 1000 }); // poll to keep data fresh
4. Global State Management:
// Without React Query
const { data, dispatch } = useContext(AppContext);
// With React Query
const { data, refetch } = useQuery(['items'], fetchData);
5. Network Request Optimization:
// Without React Query
const debouncedFetch = debounce(fetchData, 300);
// With React Query
const { data } = useQuery(['items'], fetchData, {
  refetchOnWindowFocus: false
});
6. Server Pagination & Infinite Loading:
// Without React Query
const loadMore = () => fetchMoreData(pageNumber);
// With React Query
const { data, fetchNextPage } = useInfiniteQuery(['items'], fetchPageData);
7. Error Handling & Retrying:
// Without React Query
fetch('/api/data').catch(error => retry(fetchData, 3, error));
// With React Query
const { data } = useQuery(['items'], fetchData, { retry: 3 });
8. Developer-Friendly DevTools:
To see it in action, here is an example of displaying paginated data using React Query.
Let's assume you have a set of data with pagination and filters: clicking a tab changes the filter, a dropdown selects the number of items to display, and there are Previous and Next buttons.
Implementing this using react-query:
const Component = () => {
  const [filterA, setFilterA] = React.useState<boolean>(false);
  const [page, setPage] = React.useState<number>(1);
  const [pageSize, setPageSize] = React.useState<number>(10);

  const GetData = async (filters) => {
    const filterString = filters
      ? "?" + filters.map((filter) => filter.key + "=" + filter.value).join("&")
      : "";
    const url = "/api/data/" + filterString;
    const res = await axios.get(url);
    return res.data;
  };
  const {
    data,
    status,
    error
  } = useQuery(["data", filterA, page, pageSize], () =>
    GetData([
      { key: "filter_a", value: filterA },
      { key: "page", value: page },
      { key: "page_size", value: pageSize },
    ])
  );
  return (
    <Tabs>
      <TabList>
        <Tab
          onClick={() => {
            setFilterA(false);
            setPage(1);
          }}
        >
          Tab 1
        </Tab>
        <Tab
          onClick={() => {
            setFilterA(true);
            setPage(1);
          }}
        >
          Tab 2
        </Tab>
        <Flex>
          <Button
            onClick={() => setPage(page - 1)}
            disabled={page === 1}
          >
            Previous
          </Button>
          <Text>
            {page}
          </Text>
          <Button
            onClick={() => setPage(page + 1)}
            disabled={data?.next === null}
          >
            Next
          </Button>
          <Select
            value={pageSize}
            onChange={(e) => setPageSize(Number(e.target.value))}
          >
            <option value={10}>10</option>
            <option value={20}>20</option>
            <option value={50}>50</option>
          </Select>
        </Flex>
      </TabList>
      <TabPanels>
        <TabPanel>
          {status === "loading" ? (
            <ComponentLoader />
          ) : (
            <DataTable data={data.results} />
          )}
        </TabPanel>
        <TabPanel>
          {status === "loading" ? (
            <ComponentLoader />
          ) : (
            <DataTable data={data.results} />
          )}
        </TabPanel>
      </TabPanels>
    </Tabs>
  );
};

export default Component;
Just as useEffect has a dependency array, you can treat the key array in useQuery similarly: whenever any variable in the array changes, the API call is triggered again. The same array is used to cache the data and remember which API response it belongs to.
useQuery(["data", a, b, c], () => GetData(a, b, c))
So here, whenever the a, b, or c variable changes, the query runs again.
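To build intuition for this, here is a toy sketch. It is NOT React Query's actual implementation, just an illustration of the idea that the key array acts both as a dependency array and as a cache key:

```javascript
// Toy illustration of query-key behaviour (not React Query internals):
// serializing the key array gives a cache key, so changing any element
// triggers a fresh "fetch", while repeated calls with the same key
// are served from the cache.
const cache = new Map();
let fetchCount = 0;

function fakeUseQuery(queryKey, fetcher) {
  const hash = JSON.stringify(queryKey); // serialize the key array
  if (!cache.has(hash)) {
    fetchCount += 1;                     // key changed -> refetch
    cache.set(hash, fetcher());
  }
  return cache.get(hash);                // same key -> cached result
}

const first = fakeUseQuery(["data", 1, 10], () => "page 1");
const again = fakeUseQuery(["data", 1, 10], () => "page 1"); // cache hit
const next = fakeUseQuery(["data", 2, 10], () => "page 2");  // refetch
```

The real library adds staleness, garbage collection, and background refetching on top, but the key-to-cache mapping works on this principle.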
Yes, it's definitely worth shifting from Redux to React Query. You would still need Redux if you want to communicate complex local state across components — for example, when multiple components toggle the display of other components on button clicks and they don't share a common parent.
But you no longer need Redux to store API-layer data, since all of that is handled by React Query.
A round of applause, dear readers, for reaching the end — if you enjoyed this article as much as a developer loves a well-commented code, give it a clap to brighten my day! 👏
AWS Lambda is a powerful serverless computing service that allows you to run your code without provisioning or managing servers. When working with Lambda, it’s common to encounter scenarios where you need to read files from various sources, such as Amazon S3 or other storage systems.
In this article, we’ll explore different methods for reading files in AWS Lambda, including reading text files, CSV files, and Parquet files.
The most common ways a Lambda reads files are via an EventBridge trigger, an S3 trigger, or directly from an S3 location. Here is a sample event for an S3 PutObject call delivered via EventBridge:
{
  "version": "0",
  "id": "12345678-1234-1234-1234-123456789012",
  "detail-type": "AWS API Call via CloudTrail",
  "source": "aws.s3",
  "account": "123456789012",
  "time": "2023-06-15T12:34:56Z",
  "region": "us-west-2",
  "resources": [],
  "detail": {
    "eventVersion": "1.08",
    "eventTime": "2023-06-15T12:34:56Z",
    "eventSource": "s3.amazonaws.com",
    "eventName": "PutObject",
    "awsRegion": "us-west-2",
    "sourceIPAddress": "192.0.2.0",
    "userAgent": "AWS SDK for Python",
    "requestParameters": {
      "bucketName": "my-s3-bucket",
      "key": "path/to/my-file.csv"
    },
    "responseElements": {
      "x-amz-request-id": "ABC123DEF456",
      "x-amz-id-2": "a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0u1v2w3x4y5z"
    },
    "s3": {
      "s3SchemaVersion": "1.0",
      "configurationId": "MyEventConfig",
      "bucket": {
        "name": "my-s3-bucket",
        "ownerIdentity": {
          "principalId": "A1B2C3D4E5F6G7H8I9J0"
        },
        "arn": "arn:aws:s3:::my-s3-bucket"
      },
      "object": {
        "key": "path/to/my-file.csv",
        "size": 1024,
        "eTag": "1234567890abcdef",
        "versionId": "1a2b3c4d5e6f7g8h9i0j1k2l3m4n5o6p7q8r9s0t1u2v3w4x5y6z",
        "sequencer": "A1B2C3D4E5F6G7H8I9J0K1L2M3N4O5P6Q7R8S9T0U1V2W3X4Y5Z"
      }
    }
  }
}
On S3 Trigger:
import boto3

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # Get the bucket and object key from the S3 event
    bucket = event['Records'][0]['s3']['bucket']['name']
    object_key = event['Records'][0]['s3']['object']['key']

    # Retrieve the file from S3
    response = s3_client.get_object(Bucket=bucket, Key=object_key)

    # Read the file contents as text
    file_contents = response['Body'].read().decode('utf-8')

    # Process the file contents as needed
    # ...
On EventBridge:
import boto3

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # Extract the S3 bucket and object key from the event
    bucket = event['detail']['requestParameters']['bucketName']
    object_key = event['detail']['requestParameters']['key']

    # Retrieve the file from S3
    response = s3_client.get_object(Bucket=bucket, Key=object_key)

    # Read the file contents as text
    file_contents = response['Body'].read().decode('utf-8')

    # Process the file contents as needed
    # ...
Reading a CSV file:
import boto3
import csv

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # Extract the S3 bucket and object key from the event
    bucket = event['detail']['requestParameters']['bucketName']
    object_key = event['detail']['requestParameters']['key']

    # Retrieve the file from S3
    response = s3_client.get_object(Bucket=bucket, Key=object_key)

    # Read the file contents as text
    file_contents = response['Body'].read().decode('utf-8')

    # Parse the CSV data
    csv_data = csv.reader(file_contents.splitlines())

    # Iterate over each row in the CSV data
    for row in csv_data:
        # Access the values in each column of the row
        column1 = row[0]
        column2 = row[1]
        # ... process the data as needed
Reading a Parquet file:
import io

import boto3
import pyarrow.parquet as pq

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    # Get the bucket and object key from the S3 event
    bucket = event['Records'][0]['s3']['bucket']['name']
    object_key = event['Records'][0]['s3']['object']['key']

    # Retrieve the Parquet file from S3
    response = s3_client.get_object(Bucket=bucket, Key=object_key)

    # Read the Parquet file using pyarrow (wrap the bytes in a buffer,
    # since read_table expects a path or file-like object, not raw bytes)
    parquet_table = pq.read_table(io.BytesIO(response['Body'].read()))

    # Access the Parquet table data
    # Example: Get a specific column
    column_data = parquet_table.column('column_name')

    # Process the Parquet data as needed
    # ...
I hope this article provides you with a comprehensive understanding of different file reading methods in AWS Lambda and helps you leverage the full potential of serverless computing for your file processing needs.
Happy coding!
Apache Spark is an open-source data processing framework for big data applications. It provides a simple and fast way to process large amounts of data in a distributed environment. In this article, I will show you how to set up Apache Spark in 5 minutes using Docker and IntelliJ.
We will be utilizing the Spark 3.3.0 framework, optimized for Hadoop 3.3, with the use of OpenJDK 8 as our JDK.
Additional steps you may consider:
1. Add the below dependency in pom.xml
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.13</artifactId>
  <version>3.3.0</version>
</dependency>
2. Create this docker compose file in your root folder: (reference)
NOTE: Create the /tmp/spark-events-local directory beforehand; it is mounted into the history server container.
version: '3'
services:
  spark-master:
    image: bde2020/spark-master:3.3.0-hadoop3.3
    container_name: spark-master
    ports:
      - "8080:8080"
      - "7077:7077"
    environment:
      - INIT_DAEMON_STEP=setup_spark
  spark-worker-1:
    image: bde2020/spark-worker:3.3.0-hadoop3.3
    container_name: spark-worker-1
    depends_on:
      - spark-master
    ports:
      - "8081:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
  spark-worker-2:
    image: bde2020/spark-worker:3.3.0-hadoop3.3
    container_name: spark-worker-2
    depends_on:
      - spark-master
    ports:
      - "8082:8081"
    environment:
      - "SPARK_MASTER=spark://spark-master:7077"
  spark-history-server:
    image: bde2020/spark-history-server:3.3.0-hadoop3.3
    container_name: spark-history-server
    depends_on:
      - spark-master
    ports:
      - "18081:18081"
    volumes:
      - /tmp/spark-events-local:/tmp/spark-events
3. To test the working and setup of your application, you can paste the following code in your Main.java file, which counts the number of words in a text file:
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

import java.util.Arrays;

public class Main {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("SparkTest").setMaster("local");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> rdd = sc.textFile("src/main/resources/test.txt");
        long count = rdd.flatMap(line -> Arrays.asList(line.split(" ")).iterator())
                .map(word -> word.replaceAll("[^a-zA-Z]", "").toLowerCase())
                .filter(word -> !word.isEmpty())
                .count();
        System.out.println(count);
    }
}
4. Run the docker compose file (docker-compose up -d) and then run the Main.java file.
Congratulations! You have successfully set up Apache Spark and tested it with a Java application.
2. Now declare static folders in Express along with endpoints to serve these static files, and also initialize CORS.
Example template for index.js:
require("dotenv").config();
const path = require("path");
const express = require("express");
const app = express();
const mongoose = require("mongoose");
var cors = require("cors");
const { default: axios } = require("axios");

app.use(express.urlencoded({ limit: "50mb", extended: true }));
app.use(express.json({ limit: "50mb" }));
app.use(cors());

app.use((req, res, next) => {
  res.header("Access-Control-Allow-Origin", "*");
  res.header(
    "Access-Control-Allow-Headers",
    "Origin, X-Requested-With, Content-Type, Accept, Authorization,auth-token"
  );
  if (req.method === "OPTIONS") {
    res.header("Access-Control-Allow-Methods", "PUT, POST, PATCH, DELETE, GET");
    return res.status(200).json({});
  }
  next();
});

const connect = mongoose
  .connect(process.env.MONGO_URI, {
    useUnifiedTopology: true,
    useNewUrlParser: true,
  })
  .then(() => console.log("MongoDB connected..."))
  .catch((err) => console.log(err));

app.use("/api/org", require("./routes/org"));
app.use(express.static(path.join(__dirname, "./build")));

app.get("*", (req, res) => {
  res.sendFile(path.resolve(__dirname, "./build/index.html"), function (err) {
    if (err) {
      res.status(500).send(err);
    }
  });
});

const port = process.env.PORT || 5000;
app.listen(port, () => console.log(`Listening on port ${port}`));

module.exports = app;
3. Remove build/dist or any other static folders from .gitignore and push it to your GitHub repository.
4. Finally, add vercel.json in root directory
{
  "builds": [
    {
      "src": "./index.js",
      "use": "@vercel/node"
    }
  ],
  "routes": [{ "src": "/(.*)", "dest": "/" }]
}
5. Everything else stays the same.
6. Now push the whole thing to your repo and deploy it on Vercel.
Don't forget to add your environment variables in the Vercel dashboard.
If you encounter any issues, feel free to comment.
Do it right the first time!
We will build a medium-api npm package to read articles from Medium and display them on your website, and also to post articles from your blog website to Medium, which is something most bloggers dream of.
npm adduser
3. Type your username and password, and you are ready to start.
The script we are going to write always depends on what we are actually going to build.
For starters, let's build a small yet significant npm package. People often want to display their Medium articles on their blog website and also post articles from their blog to Medium. Medium has made this possible through the Medium API, but using it requires knowing how to work with REST APIs, which not everyone does. So let's create an npm package that makes it simple.
npm init
This is where we write the code that will be available to the user.
For this package we only need Axios. Since users of the package will need it at runtime, install it as a regular dependency:
npm i axios
We need to get the Medium articles from the user; to do that, we need an integration token from Medium.
First, we write the config object and get the user ID, which is needed to retrieve data from the user's Medium account.
const axios = require("axios");

async function getMediumArticles(options) {
  const config = {
    headers: {
      "Host": "api.medium.com",
      "Content-type": "application/json",
      "Authorization": `Bearer ${options.auth}`,
      "Accept": "application/json",
      "Accept-Charset": "utf-8",
    },
  };
  let userData;
  let userPublications;
  await axios.get("https://api.medium.com/v1/me", config).then((res) => {
    userData = res.data.data;
  });
  await axios
    .get(`https://api.medium.com/v1/users/${userData.id}/publications`, config)
    .then((res) => {
      userPublications = res.data.data;
    });
  return { userData, userPublications };
}
To post from your blog website to medium also we need that auth token
async function addPost(options) {
  const config = {
    headers: {
      "Host": "api.medium.com",
      "Content-type": "application/json",
      "Authorization": `Bearer ${options.auth}`,
      "Accept": "application/json",
      "Accept-Charset": "utf-8",
    },
  };
  let userData;
  await axios.get("https://api.medium.com/v1/me", config).then((res) => {
    userData = res.data.data;
  });
  await axios
    .post(
      `https://api.medium.com/v1/users/${userData.id}/posts`,
      {
        title: options.title,
        contentFormat: "html",
        content: options.html,
        canonicalUrl: options.canonicalUrl,
        tags: [options.tags],
        publishStatus: options.publishStatus,
      },
      config
    )
    .then((res) => console.log(res.data))
    .catch((err) => console.log(err));
}
We need to export the functions we have made so that users can access them:
module.exports = { getMediumArticles, addPost };
Type this in your command prompt, and your package will be available to npm users:
npm publish
Check out the medium npm package here.
Make your React app more awesome. 😎😎😎
JavaScript frameworks like React bring a first-class user experience and are also very developer-friendly. Beginners often build a complete website using starter kits like create-react-app and don't realize the major issue until they go live.
If you have built a big website with many components and go to view the page source, you don't see any of them in it 🤔🤔 Why?
You can only see an empty div#root, which is expected because React works on a virtual DOM; the content of your website is actually rendered on the client. This is a huge issue for search engine optimization. 😰
💡 This can be solved by rendering the DOM on the server-side and sending it as a string to the client.
Many frameworks, like Gatsby and Next.js, have implemented this for you and can be used directly. However, in this article, I will explain how to implement SSR from scratch.
Assuming that you’re using many routes, redux, different styled-components, etc., let’s code accordingly to work for all kinds of react apps.
Create a server.js file in the root directory of your app
Initialize an express app
const express = require("express");
const path = require("path");
const port = process.env.PORT || 8080;
const app = express();

app.use(express.static(path.resolve(__dirname, ".", "build")));

app.get("*", (req, res, next) => {
  res.send("hello world");
});

app.listen(port, () => {
  console.log(`App started in port ${port}`);
});
Let’s get back to this after a while…
Some changes need to be done in App.js and index.js
Here we wrap the App component with BrowserRouter on the client side (in index.js), not in App.js, because we also need to import App in server.js, where it will be wrapped with StaticRouter.
And instead of ReactDOM.render, we use ReactDOM.hydrate, because the React documentation says:
hydrate() is the same as render(), but is used to hydrate a container whose HTML contents were rendered by ReactDOMServer. React will attempt to attach event listeners to the existing markup. React expects that the rendered content is identical between the server and the client.
Basically, it means that it rehydrates the dried string sent from the server, bringing event listeners and everything else back to life.
2. App needs to be wrapped with two wrappers: BrowserRouter from 'react-router-dom' and StaticRouter from 'react-router'. This enables the routes to work properly on both client and server: the client app needs to be wrapped in BrowserRouter, while the server side uses StaticRouter.
This doesn't mean that we will have two App.js files; everything will become clear in a moment.
After npm run build, we can find index.html in the build folder with <div id="root"></div>, which we read using the fs module, fill with <App/> rendered as a string, and send back. We normally wrap the App component with BrowserRouter, but on the server we wrap it with StaticRouter, its stateless analogue. StaticRouter accepts two props: location, which we get from req.url in the Express route, and context, for which we pass an empty object. Normally, context is used to store information regarding those routes, and it can be made available through the staticContext prop.
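The replacement step described above can be sketched as a plain string operation. This is a minimal sketch: `template` stands in for the contents of build/index.html read with fs, and `appMarkup` stands in for the string produced by ReactDOMServer.renderToString(<App />):

```javascript
// Sketch of the server-side injection step: fill the empty root div of
// the CRA build template with the server-rendered markup.
function injectMarkup(template, appMarkup) {
  return template.replace(
    '<div id="root"></div>',
    '<div id="root">' + appMarkup + "</div>"
  );
}

// Example with a minimal stand-in template:
const template = '<html><body><div id="root"></div></body></html>';
const html = injectMarkup(template, "<h1>Hello</h1>");
```

In the real server route, you would read the template once with fs.readFileSync, call injectMarkup with the renderToString output, and pass the result to res.send.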
When a route is not found, render NotFound.js or any other component like this, so that we can access the redirect URL and some information about it on the server side:
import React from 'react';

export default ({ staticContext = {} }) => {
  staticContext.status = 404;
  return <h1>Oops, nothing here!</h1>;
};
Add this in the server route
if (context.url) {
  return res.redirect(301, context.url);
}
Now, if you run node server.js, you will get many errors 🤧
Since the React code uses ES6 modules and JSX, Node cannot execute it directly, so we need Babel and several presets to transpile it to plain JavaScript. To do this, we first need to add a webpack config. It needs many dev dependencies, such as style-loader for CSS files, preset-react, stage-0 for asynchronous functions, and more, depending on what you have used. I am providing the code containing almost everything.
I will explain it in detail in my next article on building a webpack config for a React app.
Now we need to register Babel and its presets and then require server.js. So create index.js in the root directory.
Now add a new script in package.json
"ssr": "node index.js"
That’s it…
npm run build
npm run ssr
Now you can run these commands and view the page source; you will find the whole markup rendered. You can also check the network tab to see that the very first request to localhost already returns the rendered app.
If your website contains many routes and components, you will definitely notice the difference and feel how smooth and fast your app becomes 🤩🤩🤩
Make your app more awesome. 😎😎😎
Docker is an open-source containerization platform. Docker enables developers to package applications into containers — standardized executable components that combine application source code with all the operating system (OS) libraries and dependencies required to run the code in any environment.
To make your react app ready for hosting, we must dockerize it.
9. Expose port 80
10. To run NGINX within the container without halting, we should use the daemon off configuration directive described in the official docs.
Finally, create a Docker image from the Dockerfile using the following commands:
docker image build -t <username>/<image-name> .
docker push <username>/<image-name>
To test it before pushing, use:
docker container run -d -p 8080:80 <username>/<image-name>
I highly recommend using the ESLint VS Code extension while developing, because even with a simple error your page won't show up, and it will be very difficult to tell whether the problem is with dockerizing or with the application code.
One can also use docker-compose to speed up testing and building the image in a better way.
Create a docker-compose.yml file and run the following commands:
docker-compose up
If you have come this far, you definitely liked the article. Please do consider applauding the article and following me for more such articles.
I have explained this in detail in my previous article here
The way Firebase Cloud Messaging works is that it creates a token for every user, which needs to be stored in a database (Mongo, or Firebase itself). Using an API provided by Firebase, we then send notifications as JSON data to all users or to selected users.
Click here to open Firebase console and login with your credentials
Go to index.html and paste the CDN scripts shown in the code snippet below to initialize Firebase
<script src="https://www.gstatic.com/firebasejs/7.15.1/firebase-app.js"></script>
<script src="https://www.gstatic.com/firebasejs/7.15.1/firebase-messaging.js"></script>
var firebaseConfig = {
apiKey: ".............",
authDomain: "..............",
databaseURL: "............",
projectId: "..........",
storageBucket: "...........",
messagingSenderId: "............",
appId: "....................",
measurementId: ".........",
};
firebase.initializeApp(firebaseConfig);
First we create a messaging object and get a token from each user. For this we need a public VAPID key, which can be found in the Firebase console under
Settings → Cloud Messaging → Web configuration
Create a key pair if there is none, or copy the existing one and paste it as the public VAPID key in the code shown below.
function showToken(a) {
  console.log(a);
}
const messaging = firebase.messaging();
messaging.usePublicVapidKey("paste the key pair here");
messaging.requestPermission().then(() => {
  console.log("granted");
});
messaging
  .getToken()
  .then((currentToken) => {
    if (currentToken) {
      console.log(currentToken);
    } else {
      // Show permission request.
      console.log(
        "No Instance ID token available"
      );
      // Show permission UI.
      updateUIForPushPermissionRequired();
      setTokenSentToServer(false);
    }
  })
  .catch((err) => {
    console.log("An error occurred while retrieving token. ", err);
    showToken("Error retrieving Instance ID token. ", err);
    //setTokenSentToServer(false);
  });
In the showToken function, add the token to the database of your choice instead of logging it to the console.
messaging.onMessage((payload) => {
  var obj = JSON.parse(payload.data.notification);
  var notification = new Notification(obj.title, {
    icon: obj.icon,
    body: obj.body,
  });
});
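As a side note on the payload shape: the handlers above JSON.parse a stringified notification object from payload.data. A hedged sketch of building a matching request body for FCM's legacy HTTP send API (the tokens, titles, and icon name here are placeholder values, not real credentials or data):

```javascript
// Sketch: build the JSON body for a multi-device data message whose
// shape matches the handlers above (a stringified notification object
// under payload.data.notification). `tokens` would come from wherever
// showToken stored each user's FCM token.
function buildFcmBody(tokens, title, body, icon) {
  return {
    registration_ids: tokens, // list of device tokens to target
    data: {
      // the onMessage / background handlers JSON.parse this string
      notification: JSON.stringify({ title, body, icon }),
    },
  };
}

const payload = buildFcmBody(
  ["token-1", "token-2"],
  "Hi!",
  "A new article is out",
  "logo192.png"
);
```

You would POST this body to the FCM send endpoint with your server key in the Authorization header; the next article covers that flow in detail.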
Settings → Cloud Messaging → Project credentials → Sender ID
importScripts("https://www.gstatic.com/firebasejs/7.15.1/firebase-app.js");
importScripts(
  "https://www.gstatic.com/firebasejs/7.15.1/firebase-messaging.js"
);
firebase.initializeApp({
  apiKey: ".............",
  authDomain: "..............",
  databaseURL: "............",
  projectId: "..........",
  storageBucket: "...........",
  messagingSenderId: "............",
  appId: "....................",
  measurementId: ".........",
});
const messaging = firebase.messaging();
messaging.setBackgroundMessageHandler(function (payload) {
  console.log(
    "[firebase-messaging-sw.js] Received background message ",
    payload
  );
  var obj = JSON.parse(payload.data.notification);
  var ntitle = obj.title;
  var noptions = {
    body: obj.body,
    icon: obj.icon,
  };
  return self.registration.showNotification(ntitle, noptions);
});
Yay!! Everything is set now. You just need to go to the Firebase
Console → Cloud Messaging → Send new message
You need to get the FCM token from the database where you stored the tokens using the showToken function.
index.html
<script src="https://www.gstatic.com/firebasejs/7.15.1/firebase-app.js"></script>
<script src="https://www.gstatic.com/firebasejs/7.15.1/firebase-messaging.js"></script>
<script>
  function showToken(a) {
    console.log(a);
  }
  var firebaseConfig = {
    apiKey: ".............",
    authDomain: "..............",
    databaseURL: "............",
    projectId: "..........",
    storageBucket: "...........",
    messagingSenderId: "............",
    appId: "....................",
    measurementId: ".........",
  };
  // Initialize Firebase
  firebase.initializeApp(firebaseConfig);
  const messaging = firebase.messaging();
  messaging.usePublicVapidKey(
    ".............."
  );
  messaging.requestPermission().then(() => {
    console.log("granted");
  });
  messaging
    .getToken()
    .then((currentToken) => {
      if (currentToken) {
        console.log(currentToken);
      } else {
        // Show permission request.
        console.log(
          "No Instance ID token available. Request permission to generate one."
        );
        // Show permission UI.
        updateUIForPushPermissionRequired();
        setTokenSentToServer(false);
      }
    })
    .catch((err) => {
      console.log("An error occurred while retrieving token. ", err);
      showToken("Error retrieving Instance ID token. ", err);
      //setTokenSentToServer(false);
    });
  messaging.onMessage((payload) => {
    var obj = JSON.parse(payload.data.notification);
    var notification = new Notification(obj.title, {
      icon: obj.icon,
      body: obj.body,
    });
  });
</script>
<script>
  // Register the messaging service worker
  if ("serviceWorker" in navigator) {
    window.addEventListener("load", () => {
      navigator.serviceWorker
        .register("./firebase-messaging-sw.js")
        .then((reg) => console.log("Success: ", reg.scope))
        .catch((err) => console.log(err));
    });
  }
</script>
const CACHE_NAME = "version-1";
const urlsToCache = ["index.html", "offline.html"];
const self = this;

// Install a service worker
self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => {
      console.log("opened cache");
      return cache.addAll(urlsToCache);
    })
  );
});

// Listen for requests
self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((cachedResponse) => {
      // Serve the cached response when available; otherwise go to the
      // network, falling back to the offline page if that fails too
      return (
        cachedResponse ||
        fetch(event.request).catch(() => caches.match("offline.html"))
      );
    })
  );
});

// Activate the service worker
self.addEventListener("activate", (event) => {
  const cacheWhitelist = [];
  cacheWhitelist.push(CACHE_NAME);
  event.waitUntil(
    caches.keys().then((cacheNames) => {
      return Promise.all(
        cacheNames.map((cacheName) => {
          if (!cacheWhitelist.includes(cacheName))
            return caches.delete(cacheName);
        })
      );
    })
  );
});
importScripts("https://www.gstatic.com/firebasejs/7.15.1/firebase-app.js");
importScripts(
  "https://www.gstatic.com/firebasejs/7.15.1/firebase-messaging.js"
);
firebase.initializeApp({
  apiKey: ".............",
  authDomain: "..............",
  databaseURL: "............",
  projectId: "..........",
  storageBucket: "...........",
  messagingSenderId: "............",
  appId: "....................",
  measurementId: ".........",
});
const messaging = firebase.messaging();
messaging.setBackgroundMessageHandler(function (payload) {
  console.log(
    "[firebase-messaging-sw.js] Received background message ",
    payload
  );
  var obj = JSON.parse(payload.data.notification);
  var ntitle = obj.title;
  var noptions = {
    body: obj.body,
    icon: obj.icon,
  };
  return self.registration.showNotification(ntitle, noptions);
});
In my next article, I will show you how to send push notifications in bulk using the FCM (Firebase Cloud Messaging) API rather than just the GUI.
Stay tuned 😉
Web applications can reach anyone, anywhere, on any device with a single codebase. Native applications are known for being incredibly rich and reliable. A PWA is the best of both worlds: Progressive Web Apps (PWAs) are built and enhanced with modern APIs to deliver native-like capabilities, reliability, and installability while reaching anyone, anywhere, on any device with a single codebase.
Converting a web app to a PWA gives it the power of a native app experience. Now, what do we actually mean by a native app experience?
First, we need to create manifest.json, which acts as a configuration file for your web app.
{
  "short_name": "...",
  "name": ".....",
  "icons": [
    {
      "src": "logo64.png",
      "sizes": "64x64 32x32 24x24 16x16",
      "type": "image/x-icon"
    },
    {
      "src": "logo192.png",
      "type": "image/png",
      "sizes": "192x192"
    },
    {
      "src": "logo512.png",
      "type": "image/png",
      "sizes": "512x512"
    }
  ],
  "start_url": ".",
  "display": "standalone",
  "theme_color": "#17202a",
  "background_color": "#17202a"
}
Here, theme_color and background_color are the colors that appear as a flash when the web app opens. The standalone display mode gives a native app look: it opens in full screen like an app. These are some of the most important properties of the manifest file.
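One step worth making explicit: the manifest only takes effect once it is referenced from index.html. A minimal sketch, assuming manifest.json sits next to index.html:

```html
<!-- in the <head> of index.html; the path assumes manifest.json
     is served from the same folder -->
<link rel="manifest" href="manifest.json" />
```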
The most important part of a PWA is the service worker.
A service worker is a background worker that acts as a programmable proxy, allowing us to control what happens on a request-by-request basis.
This basically means that a service worker (sw) is a script (a JavaScript file) that runs in the background and assists in offline-first web application development.
The service worker is the intermediary between the network and the application. A service worker needs to be registered for a web application to work offline and to enable push notifications. It caches data so that the whole page does not need to be reloaded on every visit; this is also how offline mode is enabled.
Add this as a script tag in index.html and create a file serviceWorker.js
<script>
  if ("serviceWorker" in navigator) {
    window.addEventListener("load", () => {
      navigator.serviceWorker
        .register("./serviceWorker.js")
        .then((reg) => console.log("Success: ", reg.scope))
        .catch((err) => console.log(err));
    });
  }
</script>
Here we are checking whether the browser supports service workers, and if so, we register the serviceWorker file on load. We can follow the same method for any type of application: React, Angular, plain HTML, etc.
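The support check above can be pulled out into a small pure function, which makes the registration guard easy to unit-test outside a browser. This is just a sketch; the function name is my own:

```javascript
// Sketch: the "does this browser support service workers?" check
// as a pure function that takes a navigator-like object.
function supportsServiceWorker(nav) {
  return typeof nav === "object" && nav !== null && "serviceWorker" in nav;
}

console.log(supportsServiceWorker({ serviceWorker: {} })); // → true
console.log(supportsServiceWorker({}));                    // → false
```

In the browser you would simply call it as `supportsServiceWorker(navigator)` before registering.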
We create a cache name and an array containing the files to be added to the cache. We do this so that the browser doesn't re-download the logos and other assets every time someone visits the page. Service worker code is full of promises and asynchronous calls, which is why it can look overwhelming at first.
const CACHE_NAME = "version-1";
const urlsToCache = ["index.html", "offline.html"];

// At the top level of a classic-script service worker, `this` is the
// worker's global scope, so this alias is equivalent to the built-in `self`.
const self = this;

self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => {
      console.log("opened cache");
      return cache.addAll(urlsToCache);
    })
  );
});
The service worker has no direct connection to the contents of the web app; it sits in between more like an API. So we need to listen for the requests coming from the browser:
self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((cachedResponse) => {
      // Serve from the cache when possible; otherwise go to the
      // network, falling back to the offline page on failure.
      return (
        cachedResponse ||
        fetch(event.request).catch(() => caches.match("offline.html"))
      );
    })
  );
});
A new cache is created whenever the cache name changes, for example when you ship a new version. The previous cache then becomes useless, so we need to delete it and keep only the updated cache. This can be done by adding the current cache name to a whitelist array and, on activation, deleting every cache whose name isn't in the whitelist.
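The cleanup decision itself is just array filtering, so it can be sketched as a small pure function (names here are my own, chosen to mirror the handler that follows):

```javascript
// Sketch: given all existing cache names and a whitelist,
// return the names that should be deleted on "activate".
function cachesToDelete(cacheNames, cacheWhitelist) {
  return cacheNames.filter((name) => !cacheWhitelist.includes(name));
}

console.log(cachesToDelete(["version-0", "version-1"], ["version-1"]));
// → ["version-0"]
```

The activate handler below does the same thing inline, mapping the stale names to `caches.delete(...)` calls.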
self.addEventListener("activate", (event) => {
  const cacheWhitelist = [CACHE_NAME];
  event.waitUntil(
    caches.keys().then((cacheNames) => {
      return Promise.all(
        cacheNames.map((cacheName) => {
          if (!cacheWhitelist.includes(cacheName)) {
            return caches.delete(cacheName);
          }
        })
      );
    })
  );
});
The same approach works for React or any other framework. Create-react-app includes a serviceWorker.js file in the src folder; we can register it through index.js, or do it the same way as above by creating serviceWorker.js in the public folder.
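For the index.js route, older create-react-app versions generated a src/serviceWorker.js helper exposing register() and unregister(). A sketch of opting in (assuming that generated helper file exists in your project):

```javascript
// src/index.js — sketch for a CRA project whose generated
// src/serviceWorker.js helper is still present.
import * as serviceWorker from "./serviceWorker";

// CRA calls unregister() by default; switch it to register()
// to opt in to offline-first caching.
serviceWorker.register();
```

The rest of this article takes the second route instead: a hand-written serviceWorker.js in the public folder, registered from index.html.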
// main html file (index.html)
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <link rel="icon" href="%PUBLIC_URL%/logo64.png" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <meta name="theme-color" content="#000000" />
    <meta
      name="description"
      content="Web site created using create-react-app"
    />
    <link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />
    <link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
    <title>Document</title>
  </head>
  <body>
    <noscript>You need to enable JavaScript to run this app.</noscript>
    <div id="root"></div>
    <script>
      if ("serviceWorker" in navigator) {
        window.addEventListener("load", () => {
          navigator.serviceWorker
            .register("./serviceWorker.js")
            .then((reg) => console.log("Success: ", reg.scope))
            .catch((err) => console.log(err));
        });
      }
    </script>
  </body>
</html>
serviceWorker.js (in the public folder itself):
const CACHE_NAME = "version-1";
const urlsToCache = ["index.html", "offline.html"];
const self = this;

// install a service worker
self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => {
      console.log("opened cache");
      return cache.addAll(urlsToCache);
    })
  );
});

// listen for requests
self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((cachedResponse) => {
      return (
        cachedResponse ||
        fetch(event.request).catch(() => caches.match("offline.html"))
      );
    })
  );
});

// activate the service worker
self.addEventListener("activate", (event) => {
  const cacheWhitelist = [CACHE_NAME];
  event.waitUntil(
    caches.keys().then((cacheNames) => {
      return Promise.all(
        cacheNames.map((cacheName) => {
          if (!cacheWhitelist.includes(cacheName)) {
            return caches.delete(cacheName);
          }
        })
      );
    })
  );
});
Thanks for your valuable time, and I hope you succeed in creating a progressive React app :)
Set up a rich text editor in a Next.js app in 3 steps
By default, Next.js pre-renders every page. This means that Next.js generates HTML for each page in advance, instead of having it all done by client-side JavaScript. Pre-rendering can result in better performance and SEO.
Before we proceed to the solution: here I will be using CKEditor as the rich text editor.
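Because Next.js pre-renders pages and CKEditor touches `window`, the editor component usually has to be loaded on the client only. A minimal sketch using Next.js dynamic imports (assuming the editor markup lives in a hypothetical `components/Editor.js` file of your own):

```javascript
// pages/editor.js — sketch; "../components/Editor" is a
// hypothetical file containing the <CKEditor /> markup.
import dynamic from "next/dynamic";

// ssr: false keeps CKEditor out of the server-side render,
// where `window` is not defined.
const Editor = dynamic(() => import("../components/Editor"), {
  ssr: false,
});

export default function EditorPage() {
  return <Editor />;
}
```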
2. Import unset.css into the global.css file of Tailwind, where you import the @tailwind layers:
@tailwind base;
@tailwind components;
@tailwind utilities;
@import "./unset.css";
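The contents of unset.css are not shown here; the idea (an assumption on my part) is that it restores browser default styles inside the editor, since Tailwind's preflight resets headings and lists. A hypothetical minimal sketch:

```css
/* unset.css — hypothetical sketch: re-enable the default styles
   that Tailwind's preflight removes, scoped under .unset */
.unset h1 { font-size: 2em; font-weight: bold; }
.unset h2 { font-size: 1.5em; font-weight: bold; }
.unset ul { list-style: disc; padding-left: 2rem; }
.unset ol { list-style: decimal; padding-left: 2rem; }
```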
3. Add the classnames below to the div wrapping CKEditor:
<div className="unset text-black mb-5">
  <CKEditor
    editor={ClassicEditor}
    data={text}
    onInit={(editor) => {
      console.log("Editor is ready to use!", editor);
    }}
    onChange={(event, editor) => {
      const data = editor.getData();
      setText(data);
    }}
  />
</div>
If you have any issues, let me know in the comments, and do like the article if it helped.
Thank you 😀 and have a nice day!