
Introduction

This is the documentation repository for the CappaHealth Project. It contains code samples and API call examples, and is meant to supplement the individual project readmes.

Requirements

To build the Ruby on Rails projects you will need:

Setup Instructions

Run the following from inside the project's directory:

git clone git@git.cappahealth.com:cappa/dev-white-label.git
rbenv version # should be 3.2.2 set by the project's .ruby-version
bundle install
cp .env .env.development.local # edit the variables
bundle exec rails db:create
bundle exec rails db:migrate
bundle exec rails db:seed
cp .env .env.test.local # edit the variables
RAILS_ENV=test bundle exec rake db:create
bundle exec rake # tests should pass

These instructions were written based on a Linux box; Macs should work with minimal changes using brew.

  1. Check out the repository
  2. Verify ruby version is 3.2.2 and is being used
  3. Install the bundle
  4. Add variables to .env.development.local
  5. Create the development database
  6. Migrate the database
  7. Seed the database
  8. Add variables to .env.test.local
  9. Create the test database
  10. Run rake

Odd Settings

Because of the age of the project and some of the gems in use, you need to set an odd sql_mode in MySQL. Set the following in /etc/mysql/my.cnf, or wherever appropriate for your distro.

sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO
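In context, the my.cnf fragment looks like this (the `[mysqld]` section name assumes the standard MySQL server config layout). A restart of mysqld is needed for the change to take effect.

```ini
[mysqld]
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO
```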

Passwords and Example Accounts

AWS Accounts

Here are the links for signing in to the various AWS accounts. Usernames and passwords will have to be acquired outside this document.

Account Link
Cappa mydietitian
Alaska 392351531610
Connecticut 392351531610
Arkansas 122545262452
Kentucky 907025317511
Arkansas Health Equity 752813828258
Cappa Native American 678167527493
Great Plains Real 172577662709
HALT New 430094685838
Hawaii 490153728405
Holy Cross New 148776834083
Indiana 948615882804
Iowa 686251874596
Kansas 241061327176
Louisiana 962024016837
Maryland 487822746577
Mississippi 492731062757
National Kidney Foundation 159460906062
Nebraska 670858685952
North Dakota 142960016917
North Dakota 360 308228802679
Anthem 575052634515
Pennsylvania 067310827576

Testing Credentials

Most platforms have testing credentials that grant various levels of access to the system for testing.

User Password
admin@example.com 12341234
state_admin@example.com 12341234
provider_admin@example.com 12341234
coach@example.com 12341234
user@example.com 12341234

Production Credentials

Production credentials can be found in this Google Sheet. Any time these passwords are updated, the sheet needs to be updated as well. The mobile app markets use these passwords to test our mobile applications.

Environment Variables

The majority of the differences between the white labels are controlled by environment variables. This is a list, with a best-effort explanation, of those variables.

WHITE_LABEL_NAME=The name of the white label
WHITE_LABEL_KEY=The short name or key, used in automation and scripts
DB_DATABASE=database
DB_HOST=database host (127.0.0.1 not localhost)
DB_USERNAME=database user name
DB_PASSWORD=database password
BRAND_URL=The URL the white label is on, including protocol
BRAND_HOST=The FQDN of the white label's server, used in deployment, among others
BRAND_SHORT_NAME=The short name of the white label
BRAND_PHONE=The phone number
BRAND_EMAIL_SUPPORT=support email address
BRAND_EMAIL_SENDER=Who sends emails
ANDROID_DPP_URL=Play Store URL
ANDROID_360_URL=Play Store URL
IOS_DPP_URL=Apple Store URL
IOS_360_URL=Apple Store URL
RAILS_ENV=production
SECRET_KEY_BASE=for encryption and cookies
AWS_STORAGE_REGION=S3 region for storage
AWS_STORAGE_ACCESS_KEY_ID=key for storage
AWS_STORAGE_SECRET_ACCESS_KEY=secret for storage
AWS_STORAGE_BUCKET=bucket for storage
AWS_PROFILE=white-label
AWS_REGION=us-east-1
AWS_ECR_URL=468002307566.dkr.ecr.us-east-1.amazonaws.com
AWS_ACCESS_KEY_ID=key for deployments
AWS_SECRET_ACCESS_KEY=secret for deployments
CLOUDFRONT_SUBDOMAIN=CloudFront domain name
SMTP_ADDRESS=SMTP server
SMTP_PORT=587
SMTP_DOMAIN=the sending domain for emails
SMTP_USER=login for SMTP
SMTP_PASSWORD=password for SMTP
SMTP_AUTH=plain
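As a sketch, a minimal .env.development.local for a hypothetical white label might look like this. All values below are invented placeholders for an imaginary "example" white label; real values come from the Cappa team and the AWS console.

```shell
# Hypothetical values for an invented "example" white label;
# real values come from the Cappa team and the AWS console.
WHITE_LABEL_NAME="Example Health and Prevention"
WHITE_LABEL_KEY=example
DB_DATABASE=example_development
DB_HOST=127.0.0.1               # not localhost -- forces a TCP connection
DB_USERNAME=example_user
DB_PASSWORD=changeme
BRAND_URL=https://example.org
BRAND_HOST=example.org
BRAND_SHORT_NAME="Example H&P"
BRAND_EMAIL_SUPPORT=support@example.org
```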

A note on related Projects

This section will need to be expanded to hold info about CMR, Engine, and the white label project.

Running White Labels in a dev environment

If you are working on branding or other white-label-specific logic/features, it can be helpful to run the white label environment on your local dev machine. There are two main ways to do this, each with its own use cases.

The fast way, good for testing CSS and branding changes

The fastest way to test simple things like branding changes is to overload the WHITE_LABEL_KEY variable with the white label you want to test. You can do this by directly editing the .env.development.local file, or by setting the env var before launching the command. For example:

WHITE_LABEL_KEY=ky360 bundle exec rails s

The more complete way, good for testing deeper sets of logic

A bit harder to use, but if you need a more complete white label environment you can use dotenv to load the environment directly. However, you will still need to load the .env.development file too, so that you can do things like access the dev database. A command like this example should do the trick:

dotenv -o -f env/active/.env.ky360,.env.development,.env.development.local bundle exec rails s

White Label Deployment

When deploying a White label for the first time, this process is intended to help keep track of what needs to be done. Several "teams" have to come together to deploy Android, iOS, and web apps, along with server setup and deployments. We also need a lot of information from the Cappa team, which in turn needs information from the end clients. While the process is not super complicated, it is very interconnected.

What we need from Cappa (WIP)

We do not need all of this upfront, but some things, like names and URLs, are required at certain points. Some of it has to be gathered before we can even start; the rest we can collect as we get closer to launch. Keep in mind that because we collect this information in real time, we may have to "adjust" previous steps as information is updated. However, some things cannot be adjusted. For example, once store pages have been generated for the mobile platforms, it is nearly impossible to change them.

  1. Application Key/ID - We can help with this, but this is the "short name" for the application. For consistency, this should not change. Example: ky360. This key is used internally, for things like bucket names, git branches, etc.

  2. Application FQDN - This is needed before we can deploy anything. Where possible we should refer to this FQDN instead of IP addresses. Example: ky360.org

  3. Some theming and marketing materials

  4. White Label's full name (Kentucky Health and Prevention)

  5. White Label's short name (Kentucky H&P)

  6. Creatives

    • Logo 1024x1024 Transparent SVG (or PNG as a last resort)
    • Color scheme (primary and secondary)
    • Hero Image
    • Other Landing page images
    • Application Icon (512 x 512) SVG (png with transparency as a last resort)
    • Application Banner (500 x 250) SVG (preferred) or PNG with transparency
    • Google Play Feature Graphic (optional, can use banner instead) (1024 x 500 SVG or PNG)

Accounts and things that we should setup (WIP)

  1. Set up a new server in EC2 using the AMI (currently phase1b). Make sure the server is named phase1-#{key}.
  2. Create a hosted zone in Route53, and preserve any records necessary.
  3. Create a domain in Mailgun, and set up DNS as needed. Copy credentials to the .env file.
  4. Create a bucket (#{key}-production) in S3 and copy credentials into the .env file.
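The bucket name in step 4 follows the #{key}-production convention. A sketch of the naming, using ky360 as an example key; the aws CLI call is commented out because it needs credentials for the right account:

```shell
KEY=ky360                      # example white label key
BUCKET="${KEY}-production"     # bucket naming convention from step 4
echo "$BUCKET"

# With credentials for the owning account in place:
# aws s3api create-bucket --bucket "$BUCKET" --region us-east-1
```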

S3 Bucket Configuration

CORS Configuration

[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "GET"
        ],
        "AllowedOrigins": [
            "https://northdakota360.org/"
        ],
        "ExposeHeaders": [
            "Access-Control-Allow-Origin"
        ]
    }
]
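One way to apply the policy above is to write it to a file and use the aws CLI's put-bucket-cors. The origin and bucket name below are examples and should be replaced per white label:

```shell
# Write the CORS policy, substituting this white label's origin.
ORIGIN="https://northdakota360.org/"
cat > cors.json <<EOF
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["$ORIGIN"],
    "ExposeHeaders": ["Access-Control-Allow-Origin"]
  }
]
EOF

# Apply it (requires credentials for the owning account):
# aws s3api put-bucket-cors --bucket ky360-production --cors-configuration file://cors.json
```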

First time Setup and Deploy (WIP)

Deployment should be done via the CD process and the rake deploy:run command. A docker image is built every time the main branch of the main project is updated. However, the first time, the server doesn't exist and a few extra things need to happen.

  1. Make a server
  2. Assign DNS entries to the FQDN
  3. Create the /env/active/.env-#{key} file
  4. Create app/assets/images/#{key}
  5. Create app/views/overloads/#{key}
  6. Modify config/ssh_config add entries for the new server
  7. Create bin/services/rails.service.#{key}
  8. Run rake deploy:all; this will redeploy all servers, including the new one
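Steps 3 through 5 above are plain file scaffolding. A sketch using ky360 as an example key; the ssh_config and service-file edits in steps 6 and 7 still have to be done by hand:

```shell
KEY=ky360                                           # example white label key
mkdir -p "env/active" && touch "env/active/.env-${KEY}"  # step 3: fill in variables afterwards
mkdir -p "app/assets/images/${KEY}"                 # step 4: branding images
mkdir -p "app/views/overloads/${KEY}"               # step 5: view overloads
```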

Getting an SSL Certificate

To get an SSL certificate for a server, the server's FQDN must be set up and propagated. This creates a bit of a chicken-and-egg scenario if you're trying to skip this step. You can, however, get a cert for the EC2 public DNS entries. Note that the commands below link the key and chain to a more generic location, so that they are in the same place on all servers.

export FQDN='example.com'
sudo dnf install python3 augeas-libs
sudo python3 -m venv /opt/certbot/
sudo /opt/certbot/bin/pip install --upgrade pip
sudo /opt/certbot/bin/pip install certbot certbot-nginx
sudo ln -s /opt/certbot/bin/certbot /usr/bin/certbot
sudo service nginx stop
sudo certbot certonly --standalone --debug -d $FQDN
sudo ln -s /etc/letsencrypt/live/$FQDN/fullchain.pem /etc/nginx/fullchain.pem
sudo ln -s /etc/letsencrypt/live/$FQDN/privkey.pem /etc/nginx/privkey.pem
echo "0 0,12 * * * root /opt/certbot/bin/python -c 'import random; import time; time.sleep(random.random() * 3600)' && sudo certbot renew -q" | sudo tee -a /etc/crontab > /dev/null

Transferring data from legacy servers

Because the legacy servers are out of sync with the current production servers, getting data from them can be slightly tricky. These commands will help.

First on the new host:

sudo service rails stop
docker run -it --rm --name=rails_${WHITE_LABEL_KEY} --network=host --env-file=/home/deploy/dev-white-label/.env.${WHITE_LABEL_KEY} --mount type=bind,src=/home/deploy/log,dst=/app/log ${AWS_ECR_URL}/${AWS_PROFILE}:latest sh
# inside docker
DISABLE_DATABASE_ENVIRONMENT_CHECK=1 rake db:drop
DISABLE_DATABASE_ENVIRONMENT_CHECK=1 rake db:create
DISABLE_DATABASE_ENVIRONMENT_CHECK=1 rake db:migrate
exit # back to the host

From your local machine:

# copy the database from the old system to the new
ssh deploy@legacy-server
mysqldump cappahealth_production -u cappahealth_production -p[password] --no-create-info --complete-insert --ignore-table=cappahealth_production.schema_migrations > data.sql
gzip data.sql
exit # back to local machine

scp deploy@legacy-server:~/data.sql.gz .
scp data.sql.gz deploy@sunhealthdpp.org:~/

# on the new server
ssh new-server
gunzip data.sql.gz
cat data.sql | mysql -u cappahealth_production -p[password] cappahealth_production
sudo service rails start

A note on RESTful

Parts of this project predate RESTful resource calls. Care should be taken when implementing API consumers to try to stick to RESTful conventions, but you should also be aware that some parts of the code are not RESTful at all.

Authentication

In order to authenticate you must do XYZ.

Alerts

GET Alerts

GET /api/v1/alerts

curl http://localhost:3000/api/v1/alerts

HTTP Request

GET /api/v1/alerts

Query Parameters

No Parameters Required

Auth

POST Login

POST /api/v1/login

  curl -X POST http://localhost:3000/api/v1/login -d "email=user@example.com" -d "password=12341234"

Returned JSON

{"access_token":"00676440a4bd5981b74fb70d17c4d10e","expires_at":1209599}

Example of an Authenticated Request

curl http://localhost:3000/api/v1/timelines/ -X GET  -H "Accept: application/json" -H "Authorization: 960222be2cdf3ba92b8e137e61d6028d"

All requests must be authenticated. To log in and authenticate, POST to /api/v1/login with the same email and password normally used to log in; an access token is returned. Use this token in future requests.
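The token can be captured into a variable for later requests. This sketch parses the sample response shown above with plain POSIX parameter expansion (jq works equally well if installed); the live curl calls are commented out since they need a running dev server:

```shell
# Live call (dev server must be running):
# RESPONSE=$(curl -s -X POST http://localhost:3000/api/v1/login \
#   -d "email=user@example.com" -d "password=12341234")

# Parsing shown against the sample response above:
RESPONSE='{"access_token":"00676440a4bd5981b74fb70d17c4d10e","expires_at":1209599}'
AUTH_TOKEN=${RESPONSE#*\"access_token\":\"}   # drop everything through the opening quote
AUTH_TOKEN=${AUTH_TOKEN%%\"*}                 # drop everything from the closing quote on
echo "$AUTH_TOKEN"

# Use it on later requests:
# curl http://localhost:3000/api/v1/timelines -H "Authorization: $AUTH_TOKEN"
```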

logout

set_push_token

callback

Comments

create

destroy

Courses

index

Drinks

create

update

Engagements

create

Exercises

index

show

create

destroy

Features

index

GET /api/v3/features

  # This call does not need to be authenticated

  curl http://localhost:3000/api/v3/features -X GET -H "Accept: application/json" \
  -H 'Content-Type: application/json'

Response Body

{
  "PUSH_ENABLED": "false",
  "WHITE_LABEL_KEY": "dev1"
}

This will list all features and their current state. Most of the time these will be key/value pairs that turn features on or off. However, it is possible to pass config values here as well; for example, one config value that is always passed is the WHITE_LABEL_KEY. Values will always be strings, which means care needs to be taken to account for the fact that the string "false" is truthy in some languages.
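Because every value arrives as a string, compare against the literal "true" rather than testing truthiness. A small shell sketch; feature_enabled is a hypothetical helper, not part of the API:

```shell
# Hypothetical helper: only the literal string "true" counts as enabled.
feature_enabled() {
  [ "$1" = "true" ]
}

PUSH_ENABLED="false"   # value as returned by /api/v3/features
if feature_enabled "$PUSH_ENABLED"; then
  STATE="on"
else
  STATE="off"
fi
echo "push notifications: $STATE"
```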

Lessons

index

show

activity

Meal Items

top

destroy

Messages

index

create

V2 Metrics

index

show

CREATE a new entry in metrics

POST /api/v2/metrics (BloodGlucose)

  curl http://localhost:3000/api/v2/metrics -X POST -H "Accept: application/json" \
    -H "Authorization: $AUTH_TOKEN" -H 'Content-Type: application/json' -d\
    '{
      "metric_type": "BloodGlucose",
      "values": {
        "blood_glucose": "150"
      }
    }'

Request Body

{
  "metric_type": "BloodGlucose",
  "values": {
    "blood_glucose": "150"
  }
}

Will return JSON like

[
  {
    "id":6,
    "created_at":"2022-08-07T18:00:00.000-06:00",
    "name":"User::Metric::Weight",
    "data":{
      "kilograms":90.718474,
      "pounds":200.0
    },
    "event_type":"User::Metric::Weight"
  }
  ...
  {
    "id":14,
    "created_at":"2023-06-07T08:17:42.000-06:00",
    "name":"User::Metric::BloodGlucose",
    "data":
      {
        "blood_glucose":150.0
      },
    "event_type":"User::Metric::BloodGlucose"
  }
]

POST /api/v2/metrics (BloodPressure)

  curl http://localhost:3000/api/v2/metrics -X POST -H "Accept: application/json" \
  -H "Authorization: $AUTH_TOKEN" -H 'Content-Type: application/json' -d\
  '{
    "metric_type": "BloodPressure",
    "values": {
      "diastolic": "102",
      "pulse": "100",
      "systolic": "101"
    }
  }'

Request Body

  {
    "metric_type": "BloodPressure",
    "values": {
      "diastolic": "102",
      "pulse": "100",
      "systolic": "101"
    }
  }

Will return JSON like

[
  {
    "id":6,
    "created_at":
    "2022-08-07T18:00:00.000-06:00",
    "name":"User::Metric::Weight",
    "data":{
      "kilograms":90.718474,
      "pounds":200.0
    },
    "event_type":"User::Metric::Weight"
  }
  ...
  {
    "id":15,
    "created_at":"2023-06-07T08:31:20.000-06:00",
    "name":"User::Metric::BloodPressure",
    "data":{
      "value":{
        "diastolic":"102",
        "pulse":"100",
        "systolic":"101"
      }
    },
    "event_type":"User::Metric::BloodPressure"
  }
]

When using the metrics endpoint the data you provide will determine what kind of metric is stored. Upon success a list of all stored metrics is returned.

When storing a BloodPressure metric you should pass in:

Key Description Example
metric_type The type of metric BloodPressure
diastolic Integer 102
pulse Integer 100
systolic Integer 70

When storing a BloodGlucose metric you should pass in:

Key Description Example
metric_type The type of metric BloodGlucose
blood_glucose Integer 150

destroy

render_error

metric_compiliation

Notifications

index

GET /api/v3/notifications

  curl http://localhost:3000/api/v3/notifications -X GET -H "Accept: application/json" \
    -H "Authorization: $AUTH_TOKEN" -H 'Content-Type: application/json'

Response Body

[
  {
    "id": 1,
    "user_id": 5,
    "source_id": 251,
    "source_object": "Messaging::Message",
    "body": "This is the body/message",
    "title": "Test Notification",
    "unread": true,
    "created_at": "2023-11-01T14:12:24.617-06:00",
    "updated_at": "2023-11-01T14:12:24.617-06:00",
    "severity": "info",
    "source": {
      "id":251,
      "sender_id": 5,
      "recipient_id":4,
      "body": "Example",
      "created_at": "2023-02-02T00:00:00.000-07:00",
      "updated_at": "2023-02-02T00:00:00.000-07:00",
      "read_at": "2023-02-02T06:00:00.000-07:00",
      "supervisor_id": null,
      "file": null,
      "company_id": 1,
      "client_id": 2
    }
  },
  {
    "id": 2,
    "user_id":5,
    "source_id":1,
    "source_object": "Attachment",
    "body": "This is the body/message",
    "title": "Test Notification",
    "unread": true,
    "created_at": "2023-11-01T14:14:54.075-06:00",
    "updated_at": "2023-11-01T14:14:54.075-06:00",
    "severity": "info",
    "source": {
      "id": 1,
      "lesson_id": 78,
      "created_at": "2023-10-31T15:21:17.636-06:00",
      "updated_at": "2023-10-31T15:21:17.636-06:00",
      "document_id": 1
    }
  }
]

This will list all notifications belonging to a user. There are no params. The list may be limited at any time to X number of entries, or to unread entries only; the server will handle this, and API consumers do not need to pass in any params.

Notice that the source field contains the object that triggered the notification. It can be used to help direct users to the correct part of the application. HOWEVER, it is critical to understand that the source object's structure can and will change over time. Using the field for a quick lookup is fine, but if you need the source object for display, query it separately.

show

GET /api/v3/notifications/:id

curl http://localhost:3000/api/v3/notifications/1 -X GET -H "Accept: application/json" \
    -H "Authorization: $AUTH_TOKEN" -H 'Content-Type: application/json'

Response Body

{
  "id": 1,
  "user_id":5,
  "source_id":251,
  "source_object":"Messaging::Message",
  "body":"This is the body/message",
  "title":"Test Notification",
  "unread":true,
  "created_at":"2023-11-01T14:12:24.617-06:00"
  ,"updated_at":"2023-11-01T14:12:24.617-06:00",
  "severity":"info",
  "source": {
    "id":251,
    "sender_id":5,
    "recipient_id":4,
    "body":"Example",
    "created_at":"2023-02-02T00:00:00.000-07:00",
    "updated_at":"2023-02-02T00:00:00.000-07:00",
    "read_at":"2023-02-02T06:00:00.000-07:00",
    "supervisor_id":null,
    "file":null,
    "company_id":1,
    "client_id":2
  }
}

Returns a single notification. See the above notice about the source field. The :id is the id of the notification.

mark_read

GET /api/v3/notifications/1/mark_read

  curl http://localhost:3000/api/v3/notifications/:id/mark_read -X GET -H "Accept: application/json" \
    -H "Authorization: $AUTH_TOKEN" -H 'Content-Type: application/json'

This call is fire-and-forget. It will return a 202 if the request worked, or a 422 if something kept the request from completing. No data is returned. The :id is the id of the notification.
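Since no body comes back, check the HTTP status code directly. A sketch; describe_status is a hypothetical helper for readability, and the live curl call is commented out since it needs a running server:

```shell
# Live call -- curl prints only the status code:
# STATUS=$(curl -s -o /dev/null -w '%{http_code}' \
#   http://localhost:3000/api/v3/notifications/1/mark_read \
#   -H "Authorization: $AUTH_TOKEN")

describe_status() {
  case "$1" in
    202) echo "marked read" ;;
    422) echo "could not be marked read" ;;
    *)   echo "unexpected status: $1" ;;
  esac
}

MSG=$(describe_status 202)
echo "$MSG"
```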

Posts

index

show

create

comment

Profile

Timeline

GET all timelines

GET /api/v1/timelines

curl http://localhost:3000/api/v1/timelines -X GET  -H "Accept: application/json" -H "Authorization: $AUTH_TOKEN"

Returns JSON like

[
  {
    "week":1,
    "weight":null,
    "activity":0,
    "blood_pressure":null,
    "blood_glucose":null,
    "timeline_events":[]
  },
  ...
  {
    "week":46,
    "weight":null,
    "activity":0,
    "blood_pressure":{
      "systolic":"145",
      "diastolic":"78",
      "pulse":"90"
    },
    "blood_glucose":{
      "blood_glucose":77.0
    },
    "timeline_events":[]
  }
]

Will return all timelines. The example is truncated for brevity.

show

Topics

index

show

create

User

show

update

Errors

The CappaHealth API uses the following error codes:

Error Code Meaning
400 Bad Request -- Your request is invalid.
401 Unauthorized -- Your access token is wrong or missing.
403 Forbidden -- The requested resource is restricted to administrators.
404 Not Found -- The specified resource could not be found.
405 Method Not Allowed -- You tried to access a resource with an invalid method.
406 Not Acceptable -- You requested a format that isn't JSON.
410 Gone -- The requested resource has been removed from our servers.
418 I'm a teapot.
429 Too Many Requests -- You're making too many requests! Slow down!
500 Internal Server Error -- We had a problem with our server. Try again later.
503 Service Unavailable -- We're temporarily offline for maintenance. Please try again later.