The JFrog Log Analytics and Metrics solution using Prometheus consists of three components:
- Prometheus - the component where metrics data gets ingested
- Loki - the component where log data gets ingested
- Grafana - the component where data visualization is achieved via prebuilt dashboards
- A Kubernetes cluster - Amazon EKS / Google GKE / Azure AKS / Docker Desktop / Minikube
  - Recommended Kubernetes version 1.25.2 and above
  - For Google GKE, refer to the GKE Guide
  - For Amazon EKS, refer to the EKS Guide
  - For Azure AKS, refer to the AKS Guide
  - For Docker Desktop and Kubernetes, refer to the Docker Guide
- kubectl configured to the Kubernetes cluster - for installation and usage, refer to the kubectl setup
- helm v3 - for installation and usage, refer to the helm setup (see the verification commands after this list)
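A quick, read-only way to confirm these prerequisites are in place before proceeding:
# Verify kubectl is pointed at the intended cluster
kubectl config current-context
kubectl version
# Verify helm is v3
helm version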
Supported and tested versions:
- Artifactory: 7.117.x
- Xray: 3.124.x
- Prometheus: 3.5.x
- Grafana: 12.0.x
- Loki: 3.5.x
Known limitations:
- The stack does not install well on GKE Autopilot due to the permissions it requires
Important Note: This version replaces all previous implementations. It is not an in-place upgrade of the existing JFrog solution but a full reinstall. Any dashboard customizations made on previous versions will need to be redone.
This guide assumes the implementer is performing a new setup. Changes needed to install into an existing setup are highlighted where applicable.
If Prometheus is already installed and configured, we recommend having the existing Prometheus release name handy.
If Loki is already installed and configured, we recommend having its service URL handy.
If Prometheus and Loki are already available, you can skip the installation section and proceed to the Configuration section. Commands to look up the existing release name and service URL are shown below.
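If you are unsure of those values, these standard commands can help locate them (assuming the existing observability tools run in the monitoring namespace; adjust as needed):
# Existing Prometheus release name
helm list -n monitoring
# Existing Loki service name and port (to build the service URL)
kubectl get svc -n monitoring | grep -i loki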
Warning
The old Docker registry partnership-pts-observability.jfrog.io, which contains older versions of this integration, is now deprecated. We'll keep the existing Docker images on this old registry until August 1st, 2024. After that date, this registry will no longer be available. Please helm upgrade your JFrog Kubernetes deployment so that it pulls images from the new releases-pts-observability-fluentd.jfrog.io registry, as specified in the Helm value files above. Doing so avoids ImagePullBackOff errors in your deployment once the old registry is gone.
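A quick way to check whether your current deployment still references the deprecated registry (a sketch; adjust the namespace to wherever your JFrog applications run):
# List all container images in the namespace and look for the old registry
kubectl get pods -n jfrog -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' \
  | grep partnership-pts-observability.jfrog.io \
  || echo "No images from the deprecated registry found"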
The Prometheus Community kube-prometheus-stack Helm chart allows the creation of Prometheus instances and includes Grafana. The Grafana Community loki Helm chart (from the grafana Helm repository) allows the creation of Loki instances, which Grafana can then use as a log data source alongside Prometheus.
Once the prerequisites are met, install the Prometheus Kubernetes stack as follows:
- Create the namespaces required for the Kubernetes deployments
  - We use the jfrog namespace for the JFrog applications
  - We use the monitoring namespace for the observability tools
export JFROG_NAMESPACE=jfrog
kubectl create namespace ${JFROG_NAMESPACE}
export OBS_NAMESPACE=monitoring
kubectl create namespace ${OBS_NAMESPACE}
Note: The monitoring namespace is also used in the Loki configuration in artifactory-values.yaml and xray-values.yaml. If you decide to change it, make sure to update these files (the LOKI_URL variable).
- Install Prometheus and Grafana
# Add the required Helm repository
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
# Install the kube-prometheus-stack chart
helm upgrade --install prometheus --values helm/prometheus-grafana-values.yaml prometheus-community/kube-prometheus-stack -n ${OBS_NAMESPACE}
# If the install fails with an error about maximumStartupDurationSeconds, you may need to add --set prometheus.prometheusSpec.maximumStartupDurationSeconds=600
- For Docker Desktop
Run this additional command to correct the mount path propagation for the Prometheus node-exporter component.
Without it, an error event appears as follows: "Error: failed to start container "node-exporter": Error response from daemon: path / is mounted on / but it is not a shared or slave mount"
kubectl patch ds prometheus-prometheus-node-exporter --type json -p '[{"op": "remove", "path" : "/spec/template/spec/containers/0/volumeMounts/2/mountPropagation"}]' -n ${OBS_NAMESPACE}
- Install Loki
# Add the required Helm repository
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
# Install the Loki chart
helm upgrade --install loki --values helm/loki-values.yaml grafana/loki -n ${OBS_NAMESPACE}
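Before moving on, it is worth confirming that the Prometheus, Grafana, and Loki pods are all up:
# All pods in the observability namespace should reach the Running (or Completed) state
kubectl get pods -n ${OBS_NAMESPACE}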
Installing Artifactory using the official Helm Chart
- Before starting the Artifactory installation, generate the join and master keys
export JOIN_KEY=$(openssl rand -hex 32)
export MASTER_KEY=$(openssl rand -hex 32)
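If the jfrog Helm repository has not been added to your Helm client yet, add it before installing (the standard public JFrog charts repository):
# Add the JFrog Helm repository
helm repo add jfrog https://charts.jfrog.io
helm repo update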
- Install Artifactory (using the generated join and master keys)
# Install Artifactory
helm upgrade --install artifactory jfrog/artifactory \
--set artifactory.masterKey=${MASTER_KEY} \
--set artifactory.joinKey=${JOIN_KEY} \
--set artifactory.metrics.enabled=true \
-n ${JFROG_NAMESPACE}
💡 Open Metrics is disabled by default in Artifactory. It's enabled by setting artifactory.metrics.enabled=true.
- Follow the instructions in the helm install output to get your new Artifactory URL
export SERVICE_IP=$(kubectl get svc --namespace ${JFROG_NAMESPACE} artifactory-artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo ${SERVICE_IP}
echo "http://${SERVICE_IP}/"
OR
export SERVICE_IP=$(kubectl get svc --namespace ${JFROG_NAMESPACE} artifactory-artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo ${SERVICE_IP}
echo "http://${SERVICE_IP}/"
- Browse to the URL above and log in to Artifactory with the default credentials: admin/password
- Follow the initial setup wizard
- You will need to enter a valid Artifactory license. If needed, get a free trial license from here
- In the Artifactory UI, go to "Administration" -> "User Management" -> "Access Tokens" and generate an admin access token. Using the generated token, create a Kubernetes generic secret for the token - using one of the following methods
kubectl create secret generic jfrog-admin-token --from-file=token=<path_to_token_file> -n ${JFROG_NAMESPACE}
OR
kubectl create secret generic jfrog-admin-token --from-literal=token=<JFROG_ADMIN_TOKEN> -n ${JFROG_NAMESPACE}
- The PostgreSQL password is required for the Artifactory upgrade. Run the following command to get the current PostgreSQL password
export POSTGRES_PASSWORD=$(kubectl get secret -n ${JFROG_NAMESPACE} artifactory-postgresql -o jsonpath="{.data.postgres-password}" | base64 --decode)
echo ${POSTGRES_PASSWORD}
- Upgrade Artifactory with the custom values in helm/artifactory-values.yaml to create additional Kubernetes resources, which are required for the Prometheus service discovery process.
# Upgrade Artifactory
helm upgrade --install artifactory jfrog/artifactory \
--set artifactory.joinKey=${JOIN_KEY} \
--set databaseUpgradeReady=true --set postgresql.auth.password=${POSTGRES_PASSWORD} \
-f helm/artifactory-values.yaml \
-n ${JFROG_NAMESPACE}
This completes the necessary configuration for Artifactory and adds new service monitors, servicemonitor-artifactory and servicemonitor-observability, which expose metrics to Prometheus. You can verify that they were created as shown below.
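To confirm the new resources exist, list the ServiceMonitor objects (assuming they are created in the JFrog namespace alongside Artifactory):
kubectl get servicemonitors -n ${JFROG_NAMESPACE}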
To configure and install Xray with Prometheus metrics exposed, use our file helm/xray-values.yaml, which enables metrics and creates a new service monitor for Prometheus.
- Generate a master key for the Xray installation:
export XRAY_MASTER_KEY=$(openssl rand -hex 32)
- Use the same JOIN_KEY from the Artifactory installation in order to connect Xray to Artifactory. You'll also be using the jfrog-admin-token Kubernetes secret that was created earlier as part of the Artifactory installation. Getting the Artifactory URL:
export JFROG_URL=$(kubectl get svc -n ${JFROG_NAMESPACE} artifactory-artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://${JFROG_URL}"
OR
export JFROG_URL=$(kubectl get svc -n ${JFROG_NAMESPACE} artifactory-artifactory-nginx -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "http://${JFROG_URL}"
Install Xray
helm upgrade --install xray jfrog/xray --set xray.jfrogUrl=http://${JFROG_URL} \
--set xray.masterKey=${XRAY_MASTER_KEY} \
--set xray.joinKey=${JOIN_KEY} \
-f helm/xray-values.yaml \
-n ${JFROG_NAMESPACE}
Use kubectl port-forward as shown below in a separate terminal window
kubectl port-forward service/prometheus-operated 9090:9090 -n ${OBS_NAMESPACE}
Go to the web UI of the Prometheus instance at http://localhost:9090 and check "Status -> Service Discovery"; the list shows all the serviceMonitors.
Search for servicemonitor-artifactory and servicemonitor-xray to confirm they are successfully picked up by Prometheus.
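The same check can be done from the command line (a sketch, assuming the port-forward above is still running), since the Prometheus targets API lists the scrape pools created from the ServiceMonitors:
# List the discovered scrape pools and look for the Artifactory and Xray service monitors
curl -s http://localhost:9090/api/v1/targets | grep -o 'serviceMonitor/[^"]*' | sort -u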
Use kubectl port-forward as shown below in a separate terminal window
kubectl port-forward service/prometheus-grafana 3000:80 -n ${OBS_NAMESPACE}
- Open Grafana in a browser at http://localhost:3000. The Grafana default credentials are admin/prom-operator (set in prometheus-grafana-values.yaml).
- Go to "Data sources" on the sidebar menu
- Click Add new data source
  - Add Prometheus as a data source (if not already configured): set "Prometheus server URL" to http://prometheus-kube-prometheus-prometheus:9090/
  - Add Loki as a data source: set "URL" to http://loki:3100 (or create it through the Grafana API, as sketched after this list)
- When adding the Loki and Prometheus data sources, click the Save & Test button at the bottom to validate that the connection to the services is successful
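If you prefer scripting it, the Loki data source can also be created through the Grafana HTTP API (a sketch, assuming the port-forward above is active and the default admin credentials are unchanged):
# Create the Loki data source via the Grafana API
curl -s -X POST http://admin:prom-operator@localhost:3000/api/datasources \
  -H 'Content-Type: application/json' \
  -d '{"name":"Loki","type":"loki","url":"http://loki:3100","access":"proxy"}'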
Example dashboards are included in the grafana directory. These dashboards need to be imported to Grafana. These include:
- Artifactory Application Metrics (Open Metrics) Dashboard Download Here
- Xray Application Metrics (Open Metrics) Dashboard Download Here
- After downloading the dashboards, go to "Dashboards" -> "New" -> "Import"
- Pick Upload dashboard JSON file and upload the Artifactory and Xray dashboard files that you downloaded in the previous step (or script the import as sketched below)
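As an alternative to the UI, a downloaded dashboard can be imported through the Grafana HTTP API (a sketch; it requires jq, the filename artifactory-dashboard.json is a placeholder for whichever dashboard file you downloaded, and the port-forward and default credentials from above are assumed):
# Wrap the dashboard JSON in the import payload and POST it to Grafana
jq -n --slurpfile d artifactory-dashboard.json '{dashboard: $d[0], overwrite: true}' \
  | curl -s -X POST http://admin:prom-operator@localhost:3000/api/dashboards/db \
      -H 'Content-Type: application/json' -d @-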
If you have Loki configured as a Grafana data source, you will see a Logs link on the sidebar menu. Click it and you should see the Artifactory and Xray services with a snippet of their logs.
Click Show logs on any of these services, and you can now see all of that service's (Artifactory or Xray) logs in Grafana and start searching and filtering through them.