Aayush: weblog

IMS and LTE Policy Control for devices of different form-factors.

Posted by Aayush on March 27, 2012


Background:

With the advent of LTE and IMS, Voice and Video over LTE (VoLTE) is fast becoming a reality. Customers are turning to video services rapidly and data consumption is increasing exponentially. 

Moreover, with multi-screen devices such as smart phones and tablets being churned out in the millions, many customers now own at least two smart devices. Customers also expect their applications to provide a uniform experience irrespective of the device form factor. This holds true for all applications, and it will be a natural expectation from IMS and VoLTE applications as well.

Policy Control to the Rescue:

In contrast to OTT (Over the Top) internet traffic and OTT video applications, IMS video applications have a distinct edge: policy control and enforcement.

As the form factor of the device increases (from a smart phone to a tablet for example), its data consumption requirements also increase due to the bigger screen size. Moreover, if the customer chooses to play HD content, the throughput requirements would further increase accordingly.

Hence, in order to preserve the customer experience of video applications across multiple screens, it is important that a sufficient data pipe is provided to the application so that it performs uniformly. In addition to the data pipe, video playback latency and jitter also need to be controlled over the air.

This becomes increasingly important if we wish to deliver Live TV and VoD services over IMS.

To mitigate this situation for IMS video applications, we can effectively use the IMS and LTE policy control framework.

Solution Architecture:

The solution uses one of the most ‘ancient’ SIP headers defined by RFC 3261 in conjunction with the DIAMETER Rx interface.

The User-Agent header is defined in Section 20.41 of RFC 3261, and this header is used to provide ‘information’ on the user agent originating the SIP request. This header can be used by IMS User Equipment to provide details on the form factor of the device where the IMS client is executing. Moreover, it should also be possible to provide device pixel details (if available from the OS).

For example, on the Android operating system, the following Java code provides the screen display metrics, which can be sent to the IMS core using the User-Agent SIP header:

DisplayMetrics dMetrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(dMetrics);
String str = "Display Metrics Are : "
        + dMetrics.widthPixels
        + " x "
        + dMetrics.heightPixels;

System.out.println(str);

The P-CSCF in the IMS core network can extract the User-Agent header and use the device form factor details on the Rx interface. Based on the device form factor and resolution, the PCRF can enforce appropriate QCIs, UL Bandwidth and DL bandwidth for the specific device in question.
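As an illustration, suppose the UE sends a User-Agent value such as `IMS-Client/1.0 (Tablet; 1280x800)`. This `(FormFactor; WIDTHxHEIGHT)` convention is a hypothetical one assumed for this sketch, since RFC 3261 does not standardize the header's contents. A P-CSCF-side extraction of the form factor and resolution might then look like:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical parser for a User-Agent value carrying form-factor hints.
// The "(FormFactor; WIDTHxHEIGHT)" convention is an assumption for this
// sketch -- RFC 3261 only says the header describes the originating UA.
public class UserAgentParser {

    private static final Pattern DEVICE_INFO =
            Pattern.compile("\\((\\w+);\\s*(\\d+)x(\\d+)\\)");

    public static String describe(String userAgent) {
        Matcher m = DEVICE_INFO.matcher(userAgent);
        if (!m.find()) {
            return "unknown device";
        }
        String formFactor = m.group(1);
        int width = Integer.parseInt(m.group(2));
        int height = Integer.parseInt(m.group(3));
        return formFactor + " @ " + width + "x" + height;
    }

    public static void main(String[] args) {
        System.out.println(describe("IMS-Client/1.0 (Tablet; 1280x800)"));
    }
}
```

The extracted form factor and resolution would then be carried as input to the Rx session towards the PCRF.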

In addition, the P-CSCF also sends the codec information as received in the SDP (Session Description Protocol) to the PCRF. This information coupled with the device form factor and resolution can enable the PCRF to calculate a very accurate measure of the UL and DL bandwidth to enforce. Moreover, this information can also help the LTE network to provide bandwidth boost to premium customers or premium video content.
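As a sketch of such a PCRF-side calculation, the snippet below combines an assumed per-codec base rate with a resolution scaling factor. The codec rates and the 640x480 reference resolution are illustrative assumptions, not values from any 3GPP specification:

```java
// Hypothetical PCRF-side downlink bandwidth estimate combining the
// negotiated codec with the device resolution. The base rates and the
// reference resolution below are illustrative assumptions only.
public class BandwidthEstimator {

    static int baseKbps(String codec) {
        switch (codec) {
            case "H264": return 1000;   // assumed baseline for SD video
            case "VP8":  return 1200;   // assumed
            default:     return 500;    // assumed fallback
        }
    }

    // Scale the base rate by pixel count relative to a 640x480 reference.
    static int estimateDownlinkKbps(String codec, int width, int height) {
        double scale = (width * (double) height) / (640.0 * 480.0);
        return (int) Math.round(baseKbps(codec) * Math.max(scale, 1.0));
    }

    public static void main(String[] args) {
        // A tablet-sized screen needs a proportionally bigger pipe:
        System.out.println(estimateDownlinkKbps("H264", 1280, 800) + " kbps");
    }
}
```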

Discovering the Policy Enabled Architecture:

Policy control is a distinctive edge that the VoLTE architecture provides over traditional OTT video content. The ability of the LTE and IMS network to accurately calibrate session QoS characteristics is a true differentiator as opposed to best-effort video. Leveraging age-old SIP headers in conjunction with the PCRF can lead to a truly differentiated customer experience.

OTT video providers employ a lot of jitter control, echo cancellation and buffering techniques to enhance the customer experience, especially to compensate for poor RF conditions or congestion scenarios.

However, none of those techniques can match the real-time QoS-enabled architecture of VoLTE, which can guarantee high throughput even in low-coverage areas of LTE.

This is because LTE radio coverage alone is not the decisive factor in calculating throughput for customers. Throughput depends on the number of empty resource blocks available in a given eNode-B cell. For a three-sector LTE base station using a 20 MHz carrier, there are 100 resource blocks per sector. This gives a total of 300 resource blocks per base station available for customers.

Throughput is a function of the number of free resource blocks available for a given subscriber in the LTE cell at that time. Even if coverage is poor (cell-edge conditions), it is possible to provide high throughput to the customer through the policy control architecture.
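The relationship can be illustrated with a back-of-the-envelope calculation. Assuming a 20 MHz carrier (100 resource blocks), normal cyclic prefix (12 subcarriers x 14 OFDM symbols = 168 resource elements per resource block per 1 ms subframe) and 64-QAM, and ignoring coding and control-channel overhead, the raw ceiling works out as follows:

```java
// Back-of-the-envelope LTE throughput from free resource blocks.
// Assumptions: 20 MHz carrier (100 RBs), normal cyclic prefix
// (168 resource elements per RB per 1 ms subframe), and no coding
// or control overhead -- a raw upper bound, not a realistic rate.
public class RbThroughput {

    static double rawMbps(int freeResourceBlocks, int bitsPerSymbol) {
        int rePerRbPerMs = 12 * 14;  // 168 resource elements per RB per subframe
        double bitsPerMs = freeResourceBlocks * rePerRbPerMs * (double) bitsPerSymbol;
        return bitsPerMs * 1000 / 1_000_000;  // bits per ms -> Mbit/s
    }

    public static void main(String[] args) {
        // All 100 RBs of one sector free, 64-QAM (6 bits/symbol):
        System.out.println(rawMbps(100, 6) + " Mbit/s");
    }
}
```

Even a fraction of those resource blocks, reserved for a subscriber by policy, yields a healthy guaranteed rate regardless of how congested the best-effort traffic is.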

Operators need to realize the power of the IMS and LTE architecture to truly exploit it and create differentiation in their services.

There are a lot of other hidden nuggets in the combined IMS and LTE network architecture, which I will leave for another discussion and for some other day. Hopefully, engineers from around the world will discover these hidden nuggets and construct innovative policy enabled services for consumers.

Posted in 4G, Carriers, data management, IMS, Java, LTE, OTT, Services

Net Neutrality Simplified – Information super-highway analogy.

Posted by Aayush on December 6, 2011


A few months back, during an informal discussion with a colleague – the concept of net neutrality came along.

During this discussion, a very interesting analogy was made with respect to the information super-highway. I am extending that analogy into a story-board, which I feel will serve as a helpful tool for understanding net neutrality.

First we will review the official FCC broadband policy statement which defines Net Neutrality, and then we will look at the analogy.

Official Regulation in the US:

According to the FCC broadband policy statement, Net Neutrality regulations are defined for both wireless and fixed broadband providers.

Wireless providers need to follow the definition below, while for fixed broadband providers some additional rules apply.

Definition of Net Neutrality (for wireless providers):

“To encourage broadband deployment and preserve and promote the open and interconnected nature of the public Internet, consumers are entitled to:”

  • access the lawful Internet content of their choice.
  • run applications and use services of their choice, subject to the needs of law enforcement.
  • connect their choice of legal devices that do not harm the network.
  • competition among network providers, application and service providers, and content providers.
Additional rules which apply for fixed broadband providers are as follows:

Transparency: Fixed and mobile broadband providers must disclose the network management practices, performance characteristics, and terms and conditions of their broadband services.

No blocking: Fixed broadband providers may not block lawful content, applications, services, or non-harmful devices; mobile broadband providers may not block lawful websites, or block applications that compete with their voice or video telephony services.

No unreasonable discrimination: Fixed broadband providers may not unreasonably discriminate in transmitting lawful network traffic.

In our analogy, we will combine the definitions above.

The Information super-highway analogy:

Consider a multi-lane highway with several toll booths installed at its entrance, forming a toll station. Let us call this the information super-highway.

Vehicle owners need to pay a certain amount as toll tax whenever they traverse the toll booth. These vehicle owners can choose which toll booth they queue up at, depending upon their previous quality of experience on that booth.

For example, a certain toll booth may process vehicles very slowly and hence provide an inferior customer experience compared to another booth, and so on.

Let us consider these vehicle owners as customers, and let us consider the toll booths as network service providers (carriers).

The toll station has a certain governance structure in place. Let this governance structure be the regulator.

Let the toll tax be the rating/charging plan of the carrier levied on the customer.

The story gets interesting once the customers pass the toll booth after paying their tax.

As a customer, as long as I pay my toll tax, I have complete freedom to access the information present on the super-highway. On both sides of the super-highway we have huge digital “marts” – the Facebooks and the Googles of the world. Each digital mart has its own nuances, its own services, its own pros and its own cons.

The best part is, that all products and services offered by these marts are “free” for the customers!
In order to attract customers and merchants alike, these marts offer advertising programs and provide benefit points as part of a well structured points program.

Usually, customers park their cars in one of these digital marts to enjoy their products and services freely.

This is where one of the core concepts of net neutrality resonates – that customers are free to consume lawful content of their choice.

However, service providers also have some small, less glamorous road-side marts which offer services to the same customers. Hence, the operator-controlled digital marts are in direct competition to the OTT player controlled digital marts.

Some of these OTT digital marts also provide shop-space to individual digital retail providers. Let these individuals be the “developers” who use the OTT APIs to develop their apps – which run on Android, iOS, Facebook widgets etc.

All these ecosystem players are competing against one another.

At this point, another core concept of net neutrality resonates – “competition among network providers, application and service providers, and content providers.”

The problem:

As expected, the carrier controlled digital marts get fewer customers as opposed to the OTT player digital marts. The reasons are several, and we will not get into that analysis here.

As a result, the carrier-controlled marts lose out on revenues and also find it difficult to recover the CAPEX/OPEX they invested in building their digital marts, as well as the information super-highway itself!

Moreover, there is constant pressure of more traffic on the information super-highway, and these carriers have to periodically invest in increasing the capacity of the super-highway by adding more lanes, so that they can cater to the ever-increasing demand. This results in more CAPEX outflows for the carrier.

As soon as a new lane is added, it only adds to the woes of the carriers, as more customers throng the OTT digital marts, leading to congestion on the super-highway. A new lane is added with every technological evolution in wireless standards – from 2G, through 3G and now 4G – thus offering customers increased bandwidth.

Some operators build cheaper by-lanes to the information super-highway to offload some traffic. These by-lanes are Wi-Fi offload by-lanes. This provides only temporary relief from network congestion, but the major problem remains.

Knee-Jerk Reactions:

In order to reduce congestion, some carriers resort to bandwidth throttling – by employing speed breakers in front of the OTT digital marts. This is not a fair thing to do, as service providers may not unreasonably discriminate in transmitting lawful network traffic (see the rules above).

Some carriers resort to heavy volume based charging strategies if customers are heavy users of OTT services. All these steps are retrograde in my opinion, as you cannot charge a heavier toll tax based on which digital mart the customer intends to visit. The customer is entitled to choose and consume products and services freely on the information super highway.

Probable Solutions:

There are two solutions to this catch-22 situation:

1. Build a policy aware network.

2. Save on server and infrastructure CAPEX by moving to a virtualized IaaS cloud. Use these savings to build a bigger and better digital mart to challenge the OTT players.

The first option is immediately feasible with the advent of LTE networks and the Evolved Packet Core (EPC), whose standards define nine QCI levels. The PCRF node in the EPC is the nerve center for policy rules and QoS.
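For reference, the nine standardized QCIs of 3GPP TS 23.203 can be summarized as a simple enum, reduced here to resource type and a typical service; priorities and packet delay budgets are omitted for brevity:

```java
// The nine standardized QCI levels (3GPP TS 23.203, Release 8), reduced
// to resource type and a typical service for each.
public enum Qci {
    QCI_1(true,  "conversational voice"),
    QCI_2(true,  "conversational video (live streaming)"),
    QCI_3(true,  "real-time gaming"),
    QCI_4(true,  "non-conversational video (buffered streaming)"),
    QCI_5(false, "IMS signalling"),
    QCI_6(false, "buffered streaming, TCP-based services"),
    QCI_7(false, "voice, video, interactive gaming"),
    QCI_8(false, "buffered streaming, TCP-based services"),
    QCI_9(false, "buffered streaming, TCP-based services (default bearer)");

    public final boolean guaranteedBitRate;  // GBR vs. non-GBR bearer
    public final String typicalService;

    Qci(boolean gbr, String service) {
        this.guaranteedBitRate = gbr;
        this.typicalService = service;
    }
}
```

A policy-aware network lets the PCRF map a customer's traffic – the digital mart they are visiting, in the analogy – onto the appropriate QCI instead of throttling it.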

The second option is more dramatic, but viable. Some people would argue that the cloud business model is unproven, and that the savings on infrastructure may not be significant. I feel that rather than blindly investing heavily in infrastructure all the time, carriers can evaluate cloud-based IaaS solutions and invest more in building their own digital marts to challenge the Facebooks and the Googles of the world.

This may sound like an outrageous idea today, but you never know whether a cool social network platform may click with customers and give the OTT players a run for their money. The operator holds valuable information about its customers, and this information can be used to deliver a more personalized customer experience in the digital mart! Put policy, QoS and bandwidth boost in the mix, and these can act as critical differentiators. Furthermore, add reliable HD video communication capabilities to complete the customer experience.

A good customer care system, coupled with a reliable policy-controlled digital mart, can become a competitor to OTT marts – if operators think in this direction and give the possibility a chance.

Food for thought?

Posted in 4G, Apple, Carriers, Facebook, Google, LTE, OTT, policy, QoS

Java Concurrency Utilities (Part-02): Rejected Execution Handlers, Thread Factories and Runnable Queues

Posted by Aayush on September 25, 2011


In the first part of Java concurrency utilities, we discussed the Callable interface, Future interface and FutureTask class. The post can be found here.

In this post, we will introduce the Rejected Execution Handler utility of the Java concurrency package.

The RejectedExecutionHandler is an interface which can be implemented by the application. It acts as a “callback” interface and is invoked when the thread pool executor is unable to execute a task.

The application can then do some “housekeeping” work – which may include the queuing of this task for future execution.

In this example, the thread pool is switched off on purpose, so the submitted Runnable tasks are rejected. Each rejected task is redirected to the rejected execution handler, which queues it in a LinkedBlockingQueue.

There is another thread pool with a custom ThreadFactory, whose threads are deferred workers.

The deferred thread pool has all its worker threads “pre-started”, and they all block on the runnable queue. As the Runnable tasks start failing, they are added to the runnable queue and picked up by the deferred worker threads for execution.

Code Files:

1. RejectedTasksDemo.java – where all the action starts

2. RejectedHandler.java – which implements the RejectedExecutionHandler interface

3. WorkerThread.java – the thread which does all the hard work!

4. DeferredWorker.java – the worker thread which picks up rejected Runnable tasks from the runnable queue and executes them.

5. CustomThreadFactory.java – the thread factory for the deferred thread pool.

Code Snippets:

RejectedTasksDemo Class:

package org.demo.java.rejectedhandlers;

import java.util.Queue;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;

/**
 * 
 * @author aayush.bhatnagar
 * 
 * A class demonstrating the rejected execution handler.
 *
 */
public class RejectedTasksDemo
{
    // Thread pool with the rejected execution handler configured
    public static ThreadPoolExecutor threadP = (ThreadPoolExecutor) Executors.newFixedThreadPool(3);

    // Threads which will block on the runnable queue.
    public static ThreadPoolExecutor threadR = (ThreadPoolExecutor) Executors.newFixedThreadPool(3);

    // Queue for storing the runnable instances which cannot be executed by threadP
    public static Queue<Runnable> runnable_queue = new LinkedBlockingQueue<Runnable>();

    public static void main(String[] args) throws InterruptedException
    {
        threadR.setThreadFactory(new CustomThreadFactory());
        threadP.setRejectedExecutionHandler(new RejectedHandler());

        // Pre-start all core threads, so they block on the runnable queue.
        threadR.prestartAllCoreThreads();

        threadP.submit(new WorkerThread("runnable executed by the thread pool executor.."));
        Thread.sleep(300L);

        // Shut down threadP, so that subsequent submissions are rejected.
        threadP.shutdownNow();

        // Now the rejected tasks handler comes into the picture..
        threadP.submit(new WorkerThread("runnable which got rejected-1.."));
        threadP.submit(new WorkerThread("runnable which got rejected-2.."));
        threadP.submit(new WorkerThread("runnable which got rejected-3.."));

        // Shut down the deferred thread pool:
        threadR.shutdown();
    }
}

RejectedHandler.java


package org.demo.java.rejectedhandlers;

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

/**
 * 
 * @author aayush.bhatnagar
 * 
 * The call back handler which is invoked when a task cannot be 
 * executed by the thread pool executor.
 *
 */
public class RejectedHandler implements RejectedExecutionHandler
{
    @Override
    public void rejectedExecution(Runnable task, ThreadPoolExecutor executor)
    {
        System.out.println("oops..race condition. Somebody turned off the thread pool executor..");
        System.out.println("need to complete the rejected task..");
        // Implementations may even decide to queue the task for deferred execution.
        System.out.println("Adding the rejected task to runnable queue -- " + RejectedTasksDemo.runnable_queue.add(task));
    }
}

WorkerThread.java

package org.demo.java.rejectedhandlers;

/**
 * 
 * @author aayush.bhatnagar
 * 
 * A worker thread.
 *
 */

public class WorkerThread implements Runnable
{
    private String description;

    public WorkerThread(String description)
    {
        this.description = description;
    }

    public WorkerThread()
    {
    }

    @Override
    public void run()
    {
        System.out.println("Thread type -- " + description);
        System.out.println("Doing some work....");
        System.out.println("Work done..retiring for the day..\n");
    }
}

DeferredWorker.java


package org.demo.java.rejectedhandlers;

/**
 * 
 * @author aayush.bhatnagar
 * 
 * Thread which blocks on the runnable queue and picks up tasks from it 
 * for deferred execution.
 *
 */
public class DeferredWorker extends Thread
{
    private boolean exit_flag = true;

    public DeferredWorker()
    {
    }

    @Override
    public void run()
    {
        System.out.println("Deferred thread started -- " + this.getName());

        // Busy-wait on the runnable queue until a rejected task shows up.
        while (exit_flag)
        {
            if (RejectedTasksDemo.runnable_queue.peek() != null)
            {
                System.out.println("Iterating over the runnable queue == " + this.getName());
                Runnable task = RejectedTasksDemo.runnable_queue.poll();
                if (task != null)
                {
                    task.run();

                    if (RejectedTasksDemo.runnable_queue.isEmpty())
                        this.exit_flag = false;
                }
            }
        }
    }
}

CustomThreadFactory.java

package org.demo.java.rejectedhandlers;

import java.util.concurrent.ThreadFactory;

/**
 * 
 * @author aayush.bhatnagar
 * 
 * The custom thread creation factory
 *
 */
public class CustomThreadFactory implements ThreadFactory
{
    @Override
    public Thread newThread(Runnable task)
    {
        // Note: the pool's own Runnable is ignored here -- the DeferredWorker
        // runs its own loop over the runnable queue instead.
        Thread t = new DeferredWorker();
        t.setName("deferred");
        return t;
    }
}
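As an aside, ThreadPoolExecutor also ships with predefined rejection policies, so a custom handler is not always necessary. The sketch below saturates a one-thread, one-slot pool and lets the built-in CallerRunsPolicy execute the rejected task in the submitting thread:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Demonstrates ThreadPoolExecutor's built-in CallerRunsPolicy, as an
// alternative to a hand-written RejectedExecutionHandler.
public class BuiltInPolicies
{
    // Saturate a one-thread, one-slot pool and return the name of the
    // thread that ended up running the third (rejected) task.
    public static String runRejectedTask() throws InterruptedException
    {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(1),       // tiny queue forces rejection
                new ThreadPoolExecutor.CallerRunsPolicy()); // built-in handler

        Runnable slow = new Runnable() {
            public void run() {
                try { Thread.sleep(200L); } catch (InterruptedException e) { }
            }
        };
        final AtomicReference<String> ranIn = new AtomicReference<String>();

        pool.execute(slow); // occupies the single worker thread
        pool.execute(slow); // fills the one-slot queue
        // Rejected by the saturated pool -> CallerRunsPolicy runs it in the
        // submitting thread instead of throwing RejectedExecutionException:
        pool.execute(new Runnable() {
            public void run() {
                ranIn.set(Thread.currentThread().getName());
            }
        });

        pool.shutdown();
        pool.awaitTermination(5L, TimeUnit.SECONDS);
        return ranIn.get();
    }

    public static void main(String[] args) throws InterruptedException
    {
        System.out.println("rejected task ran in: " + runRejectedTask());
    }
}
```

CallerRunsPolicy also provides natural back-pressure: the submitting thread is kept busy running the overflow task, which slows down the rate of new submissions.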

Posted in Java, Uncategorized

Java Concurrency Utilities (Part-01): Callable, Future and FutureTask

Posted by Aayush on September 25, 2011


In this post, I will demonstrate through a very simple program the usage of the Callable, Future and FutureTask utilities. These utilities are present in the java.util.concurrent package.

However, before we come to the actual code – it is important to understand the use-cases behind the need for these utilities.

The use-case for having a Callable interface:

Sometimes, in our applications, we feel the need for our worker threads to have a return value. Doing so with ordinary threads is not possible, as the run() method has no return type.

In such cases, we can create a class which implements the Callable “type” and pass an instance of that class to the built-in Java thread pool.

The class which implements callable, is executed in a thread inside the Executor, and the return value of the “callable” is made available once the thread’s execution completes.

The use-case for Future and FutureTask:

It is often seen that, while designing APIs (exposed to 3rd-party applications), developers may provide API variants which expose “synchronous” behavior as well as an option of “asynchronous” behavior, as viewed from the 3rd-party application.

The processing for each incoming API call may happen in a worker thread, which in turn exchanges data over the network (or does some file I/O etc), and then a return value needs to be presented (think Callable), which has to be sent back to the API caller.

Hence, to the API caller it seems that the invocation was synchronous. However, under the hood, the API spawns a worker thread, which is executed asynchronously by a thread pool and returns a value at some point in the “future”, when the processing is done.

In such cases, the concepts around Future and FutureTask become important and come in handy for developers.

One of the practical usages of Future utilities can be from a protocol stack perspective – where the client sends a message through the stack. The stack sends the message over the network in a request submitter thread, receives a response “in the future” in a response listener thread and then this response is returned back to the caller thread.

The request submitter thread and the response receiver thread can communicate through Java's “Exchanger” concurrency utility (http://download.oracle.com/javase/6/docs/api/java/util/concurrent/Exchanger.html). However, this is a story for another time.
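Still, a minimal sketch of that Exchanger hand-off is easy to show, with the submitter and listener roles reduced to two threads swapping strings (the "INVITE"/"200 OK" payloads are just placeholders for this sketch):

```java
import java.util.concurrent.Exchanger;

// Minimal sketch of the Exchanger hand-off: a "request submitter" (the
// caller) and a "response listener" thread meet at the exchange point
// and swap payloads.
public class ExchangerDemo
{
    public static String exchangeWithListener(final String request, final String response)
            throws InterruptedException
    {
        final Exchanger<String> exchanger = new Exchanger<String>();

        Thread responseListener = new Thread(new Runnable() {
            public void run() {
                try {
                    // Hands the "network response" over and receives the request.
                    String seen = exchanger.exchange(response);
                    System.out.println("listener saw request: " + seen);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        responseListener.start();

        // The submitter blocks here until the listener arrives with its response.
        String got = exchanger.exchange(request);
        responseListener.join();
        return got;
    }

    public static void main(String[] args) throws InterruptedException
    {
        System.out.println("submitter got response: " + exchangeWithListener("INVITE", "200 OK"));
    }
}
```

Both threads block at exchange() until the other arrives, so the hand-off doubles as a synchronization point.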

The example below is not that complex! It only introduces these utilities.

Both these use-cases are illustrated below in the form of code snippets:

Code Snippet – Demonstrating Callable, Future and FutureTask:

The main class – where we do all the stuff:

package org.demo.java.future;

import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.FutureTask;

/**
 * 
 * @author aayush.bhatnagar
 * 
 * Demo for Callable, Future and FutureTask.
 *
 */
public class FutureDemo
{
    // This is our thread pool, which uses Java's internal thread pooling utilities.
    public static ExecutorService threadPool = Executors.newFixedThreadPool(3);

    public static void main(String[] args) throws InterruptedException, ExecutionException
    {
        /*
         * Demonstrating the Callable interface usage.
         *
         * The SomeTask.java class implements the Callable interface, and is passed as an
         * argument to the thread pool's submit method.
         *
         * Please note that the SomeTask class is not a Runnable type, but a Callable type.
         * Being a Callable type means that SomeTask can return a value once the task is
         * executed by the worker thread.
         */

        // The return value of the Callable is stored here as a "Future" type.
        // In this particular example, we are expecting a String return type.
        Future<String> result = threadPool.submit(new SomeTask());
        // get() blocks until the worker thread completes and the result is available.
        String res = result.get();

        System.out.println("Result --> " + res);

        /*
         * A FutureTask class implements the Future interface. It provides some
         * utility and control mechanisms on how the "future" task behaves.
         */
        // Create a new FutureTask with a Callable argument.
        FutureTask<String> task = new FutureTask<String>(new SomeTask());
        // Submit it to the thread pool..
        threadPool.submit(task);
        // Get the result from the Future type (FutureTask in this case).
        String res1 = task.get();

        System.out.println("Result --> " + res1);
    }
}

The SomeTask.java class which implements the Callable interface:

package org.demo.java.future;

import java.util.concurrent.Callable;

/**
 * 
 * @author aayush.bhatnagar
 * 
 * This class implements the Callable interface and
 * provides the implementation of the call() method.
 *
 */
public class SomeTask implements Callable<String>
{
    @Override
    public String call() throws Exception
    {
        // Here we do some dummy work.
        System.out.println("processing....");
        try
        {
            Thread.sleep(200L);
        }
        catch (InterruptedException e)
        {
            e.printStackTrace();
        }
        System.out.println("processing complete..exiting..");

        return "processing is complete";
    }
}
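The “control mechanisms” that Future provides beyond a plain get() include a timed get() and cancellation. A small sketch with a deliberately slow task:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Demonstrates Future's timed get() and cancellation: submit a slow
// task, wait only briefly for it, and cancel it on timeout.
public class FutureControlDemo
{
    // Returns true if the task was cancelled after the timed get() expired.
    public static boolean timedGetAndCancel() throws Exception
    {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> slow = pool.submit(new Callable<String>() {
            public String call() throws Exception {
                Thread.sleep(5000L); // simulates long network I/O
                return "too late";
            }
        });

        boolean cancelled = false;
        try {
            // Wait at most 100 ms instead of blocking indefinitely.
            slow.get(100L, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            // Interrupt the worker so the pool thread is freed immediately.
            cancelled = slow.cancel(true);
        }
        pool.shutdown();
        return cancelled;
    }

    public static void main(String[] args) throws Exception
    {
        System.out.println("task cancelled after timeout: " + timedGetAndCancel());
    }
}
```

This is exactly the shape of the protocol-stack use-case above: the caller waits a bounded time for the “future” response, and reclaims the worker thread if the network never answers.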

Posted in Java, Uncategorized

 