100ms SDK Android Quickstart Guide

Overview

This overview shows the major steps involved in creating a demo project with the 100ms SDK. Each section links to extra detail.

Here are some sample apps demonstrating this.

Simplest implementation.

Most full-featured implementation.

Jump to a section you're interested in or read top down to get the overview.

  • Show an optional preview of the user's audio and video with the 100ms hmssdk.preview.

Prerequisites

Familiarity with Android Studio and the fundamentals of Android apps.

Supported Android API Levels

100ms' Android SDK supports Android API level 21 and higher. It is built for armeabi-v7a, arm64-v8a, x86, and x86_64 architectures.

Get a Room Link

To join a video call you need an authentication token and a room id, or a server that translates a meeting link into them. The 100ms Dashboard is one way to generate these auth tokens. In production, your own server will generate them and manage user authentication.

Links created by the dashboard stop working after 10,000 minutes of video calls. You'll need to set up your own server after that.

For the purposes of this quickstart you can rely on just the 100ms dashboard. Sign up for the 100ms Dashboard here.

From either the dashboard, or your own server once it's implemented, you need to generate a video call link. Video call links generated by the 100ms Dashboard look like https://myname.app.100ms.live/meeting/correct-horse-battery.

⚙️ For Production

With your own server for authentication and link generation, the format of the link is up to you.

Add SDK dependencies

  • Add the JitPack repository to your root settings.gradle at the end of the repositories block:

You can open it in Android Studio by double-tapping Shift and typing settings.gradle.

settings.gradle
dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        mavenCentral()
        jcenter() // Warning: this repository is going to shut down soon
        maven { url 'https://jitpack.io' }
    }
}
rootProject.name = "MyVideoCallApp"
include ':app'
  • Add the 100ms SDK dependency to your app-level build.gradle.

You'll also need the WebRTC dependency for the org.webrtc.SurfaceViewRenderer that's required to display people's videos.

build.gradle
dependencies {
    // See the version in the jitpack badge above.
    implementation 'com.github.100mslive:android-sdk:x.x.x'
    implementation 'org.webrtc:google-webrtc:1.0.32006'
}

Login

Request

Here's how to get an auth token with 100ms's demo authentication:

  1. Sign up to the dashboard.
  2. Get your video call link. It should look like https://myname.app.100ms.live/meeting/correct-horse-battery.
  3. Send an HTTP POST request to https://prod-in.100ms.live/hmsapi/get-token.
  4. Set the header "subdomain" to the host of your link, e.g. myname.app.100ms.live for the link above.
  5. Set the body to JSON of the format {"code": "correct-horse-battery", "user_id": "your-customer-id"}, where code is the last segment of your link. The user_id can be any random string; you can create one with UUID.randomUUID().toString(). A sketch of the full request follows the Response section below.

⚙️ For Production

In production you may not use links at all. You will need to generate tokens on the backend and create rooms for users. Look up the Token Setup Guide here.

Response

The 100ms server will respond with an auth token like this {"token":"some-token-string"}.
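
Putting the request and response together, here's a minimal sketch of fetching a demo token. It assumes OkHttp and org.json are on the classpath; the subdomain and room code are the placeholders from the example link, so swap in your own. execute() is a blocking network call, so run this off the main thread.

import java.util.UUID
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.toRequestBody
import org.json.JSONObject

fun fetchDemoToken(): String {
    // Body format from step 5 above; user_id can be any random string.
    val json = JSONObject()
        .put("code", "correct-horse-battery")
        .put("user_id", UUID.randomUUID().toString())
        .toString()

    val request = Request.Builder()
        .url("https://prod-in.100ms.live/hmsapi/get-token")
        .header("subdomain", "myname.app.100ms.live")
        .post(json.toRequestBody("application/json".toMediaType()))
        .build()

    OkHttpClient().newCall(request).execute().use { response ->
        check(response.isSuccessful) { "Token request failed: HTTP ${response.code}" }
        // Response format: {"token":"some-token-string"}
        return JSONObject(response.body!!.string()).getString("token")
    }
}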

Permissions

Camera, Record Audio, and Internet permissions are required. Add them to your manifest.

AndroidManifest.xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.INTERNET" />

You will also need to request Camera and Record Audio permissions at runtime before you join a call or display a preview. Please follow Android Documentation for runtime permissions.
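
As an example, here's a minimal sketch of requesting both permissions with the AndroidX Activity Result API; CallActivity and requestCallPermissions are hypothetical names.

import android.Manifest
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class CallActivity : AppCompatActivity() {

    // Must be registered before the activity reaches STARTED, e.g. as a property.
    private val permissionLauncher = registerForActivityResult(
        ActivityResultContracts.RequestMultiplePermissions()
    ) { results ->
        if (results.values.all { it }) {
            // Both permissions granted: safe to preview or join the call.
        }
    }

    fun requestCallPermissions() {
        permissionLauncher.launch(
            arrayOf(Manifest.permission.CAMERA, Manifest.permission.RECORD_AUDIO)
        )
    }
}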

Instantiate HMSSDK

Instantiate the HMSSDK like this:

val hmsSdk = HMSSDK
    .Builder(application)
    .build()

Join a Video Call

To join a video call, call the join method of hmssdk with the config and appropriate listeners.

The main ones to know are:

onJoin - called when the join was successful and you have entered the room.

💡 Audio is connected automatically; video requires some work on your side.

onPeerUpdate - called when a person joins or leaves the call and when their audio/video mutes/unmutes.

onTrackUpdate - usually, when a person joins the call, the SDK first calls onPeerUpdate to notify you of the join; onTrackUpdate is then called with their actual video track.

💡 It's essential to handle this callback, or you may show peers without video.

val config = HMSConfig("user display name", authToken)
hmsSdk.join(config, MyHmsUpdateListener())

class MyHmsUpdateListener : HMSUpdateListener {
    override fun onJoin(room: HMSRoom) {}
    override fun onTrackUpdate(type: HMSTrackUpdate, track: HMSTrack, peer: HMSPeer) {}
    override fun onPeerUpdate(type: HMSPeerUpdate, peer: HMSPeer) {}
    override fun onMessageReceived(message: HMSMessage) {}
    override fun onRoleChangeRequest(request: HMSRoleChangeRequest) {}
    override fun onRoomUpdate(type: HMSRoomUpdate, hmsRoom: HMSRoom) {}
    override fun onError(error: HMSException) {}
}

How you know when people join or leave

The join method takes an interface called HMSUpdateListener. It lets you know when peers join and leave the call, mute/unmute their audio and video and lots more.

The HMSUpdateListener has a callback to notify about people joining or leaving. It is onPeerUpdate(type: HMSPeerUpdate, peer: HMSPeer).

💡 HMSPeer is an object that represents a person in the call.
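
As an illustration, here's a minimal sketch of handling joins and leaves inside your HMSUpdateListener; updateParticipantList() is a hypothetical method that re-reads hmsSdk.getPeers() (see Listening to Updates Effectively below).

override fun onPeerUpdate(type: HMSPeerUpdate, peer: HMSPeer) {
    when (type) {
        HMSPeerUpdate.PEER_JOINED,
        HMSPeerUpdate.PEER_LEFT -> updateParticipantList() // hypothetical refresh
        else -> {} // mute/unmute and other updates arrive here too
    }
}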

How to render audio and video

The SDK plays the audio for every person who joins the call. Audio begins playing as soon as join succeeds. To see a person's video you need to create an instance of org.webrtc.SurfaceViewRenderer, which comes from the WebRTC dependency you added earlier (implementation 'org.webrtc:google-webrtc:1.0.32006').

Showing Videos

A peer represents one person in the video call.

A peer's video track is in hmsPeer.videoTrack. Screenshares can be found with val screenShareVideoTrack = hmsPeer.auxiliaryTracks.find { it is HMSVideoTrack }. That is, auxiliaryTracks is a list of tracks, one of which will be a screenshare video track if the peer has chosen to share their screen.

You'll want a RecyclerView of the participants in the video call. The adapter data should be a list of a class that holds both the peer and the track to display; call it a TrackPeerPair, as sketched below.
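
A minimal sketch of that item class (the name TrackPeerPair is just this guide's convention):

data class TrackPeerPair(
    val peer: HMSPeer,
    val track: HMSVideoTrack? // null until onTrackUpdate delivers the video track
)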

Your layout XML for a peer's video needs to contain an org.webrtc.SurfaceViewRenderer:

<org.webrtc.SurfaceViewRenderer
    android:id="@+id/videoSurfaceView"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toTopOf="parent" />

Initialize it when it's added to the window and release it when it's removed. Call hmsPeer.videoTrack?.addSink(surfaceViewRenderer) to start showing video, as in the sketch below.
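
For instance, here's a minimal sketch of that lifecycle, assuming one shared EglBase instance for the app (eglBase below is an assumption, created with EglBase.create()) and that removeSink mirrors addSink.

import org.webrtc.EglBase
import org.webrtc.SurfaceViewRenderer

val eglBase: EglBase = EglBase.create() // assumption: one shared instance per app

fun attach(renderer: SurfaceViewRenderer, peer: HMSPeer) {
    renderer.init(eglBase.eglBaseContext, null) // when the view is added to the window
    peer.videoTrack?.addSink(renderer)          // start rendering this peer's video
}

fun detach(renderer: SurfaceViewRenderer, peer: HMSPeer) {
    peer.videoTrack?.removeSink(renderer) // stop receiving frames
    renderer.release()                    // when the view is removed
}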

Listening to Updates Effectively

Each time there's an onJoin, onPeerUpdate, or onTrackUpdate, you can add all the peers from hmsSdk.getPeers() into the adapter. You'd need to map them into TrackPeerPairs, as below.
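
A minimal sketch of that mapping, assuming a RecyclerView ListAdapter named participantsAdapter (a hypothetical name):

fun refreshParticipants() {
    val items = hmsSdk.getPeers().map { peer ->
        TrackPeerPair(peer, peer.videoTrack)
    }
    participantsAdapter.submitList(items) // ListAdapter diffs the change for you
}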

Where to go from here

Check out the simple version of the project.

There's also a full-featured, advanced version.

Glossary

  • Room: When you join a particular video call, all the peers are said to be in a video call room.
  • Track: Media. Can be the audio track or the video track.
  • Peer: One participant in the video call. Local peers are you, remote peers are others.
  • Broadcast: A message sent to all peers in the room. Chat messages are broadcasts.