Future of Flex

Posted August 26, 2011 by flexplusjava
Categories: Uncategorized


Worth reading:

http://blogs.adobe.com/flex/2011/08/flex-where-were-headed.html


Amazon Load Balancing and Auto Scaling

Posted November 17, 2010 by flexplusjava
Categories: Uncategorized

Amazon Load Balancing and Auto Scaling

Elastic Load Balancing

Elastic Load Balancing automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve even greater fault tolerance in your applications, seamlessly providing the amount of load balancing capacity needed in response to incoming application traffic. Elastic Load Balancing detects unhealthy instances within a pool and automatically reroutes traffic to healthy instances until the unhealthy instances have been restored. We can enable Elastic Load Balancing within a single Availability Zone or across multiple zones for even more consistent application performance.

ELB Architecture

ELB routes traffic to the instances you register with it. The load balancer has its own IP address and public DNS name.

One thing to keep in mind is that requests are balanced between the different Availability Zones first and then evenly between the instances of each zone. So if you have 10 instances in us-east-1a and 5 instances in us-east-1b, your us-east-1b instances will service twice as much traffic per instance. For that reason it is suggested that you keep the number of instances in each zone roughly equal.
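As a rough sketch (with hypothetical request counts) of how uneven zones skew per-instance load:

```python
# Sketch: ELB first splits traffic evenly across zones, then across the
# instances within each zone. Request count and zone sizes are made-up
# numbers matching the example above.
requests = 3000  # total requests in some interval (hypothetical)
zones = {"us-east-1a": 10, "us-east-1b": 5}  # instances per zone

per_zone = requests / len(zones)  # each zone gets an equal share
load = {z: per_zone / n for z, n in zones.items()}  # per-instance load

print(load["us-east-1a"])  # 150.0 requests per instance
print(load["us-east-1b"])  # 300.0 -> twice the per-instance load
```

With half as many instances, each us-east-1b instance sees exactly double the traffic, which is why roughly equal zone sizes are recommended.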

Features of Elastic Load Balancing

  • Using Elastic Load Balancing, we can distribute incoming traffic across your Amazon EC2 instances in a single Availability Zone or multiple Availability Zones. Elastic Load Balancing automatically scales its request processing capacity in response to incoming application traffic.
  • Elastic Load Balancing can detect the health of Amazon EC2 instances. When it detects unhealthy load-balanced instances, it no longer routes traffic to them, instead spreading the load across the remaining healthy instances.
  • Elastic Load Balancing supports the ability to stick user sessions to specific EC2 instances.
  • Elastic Load Balancing metrics such as request count and request latency are reported by Amazon CloudWatch.

Amazon EC2 Auto Scaling

What is Auto Scaling?

Amazon Auto Scaling is an easy-to-use web service designed to automatically launch or terminate EC2 instances based on user-defined triggers. Users can set up Auto Scaling groups and associate triggers with these groups to automatically scale computing resources based on parameters such as bandwidth usage or CPU utilization. Auto Scaling groups can work across multiple Availability Zones – distinct physical locations for the hosted EC2 instances – so that if an Availability Zone becomes unhealthy or unavailable, Auto Scaling will automatically redistribute applications to a healthy Availability Zone.

Auto Scaling allows you to automatically scale your Amazon EC2 capacity up or down according to conditions you define. With Auto Scaling, you can ensure that the number of Amazon EC2 instances you’re using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs. Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by Amazon CloudWatch and available at no additional charge beyond Amazon CloudWatch fees.

Features of Auto Scaling

  • Auto Scaling enables you to set conditions for when you want to scale up or down your Amazon EC2 usage. When one of the conditions is met, Auto Scaling automatically applies the action you’ve defined.
  • Auto Scaling enables your application to scale up Amazon EC2 instances seamlessly and automatically when demand spikes.
  • Auto Scaling allows you to automatically shed unneeded Amazon EC2 instances and save money when demand subsides.
  • Auto Scaling is enabled by Amazon CloudWatch and carries no additional fees.
  • If you’re signed up for the Amazon EC2 service, you’re already registered to use Auto Scaling and can begin using the feature via the Auto Scaling APIs or Command Line Tools.

Benefits of Amazon Auto Scaling

The core benefits of Auto Scaling:

Elastic Capacity—Automatically add compute capacity when application usage rises and remove it when usage drops

Cost Saving—Save compute costs by terminating underutilized instances automatically and launching new instances only on demand

Geographic Redundancy and Scalability—Automatically distribute, scale and balance applications over a wide geographic area using multiple Availability Zones.

Easier Maintenance—Automatically replace lost or unhealthy instances based on pre-defined triggers and thresholds

Ease of Use—Manage your instances spread across one or several Availability Zones as a single entity, using simple command-line tools or programmatically via an easy-to-use web service API

Common Uses for Auto Scaling


Automatically Scaling Your Amazon EC2 Fleet

Auto Scaling enables you to closely follow the demand curve for your applications, reducing the need to provision Amazon EC2 capacity in advance. For example, you can set a condition to add new Amazon EC2 instances in increments of 3 instances to the Auto Scaling Group when the average CPU utilization of your Amazon EC2 fleet goes above 70 percent; and similarly, you can set a condition to remove Amazon EC2 instances in the same increments when CPU Utilization falls below 10 percent. Often, you may want more time to allow your fleet to stabilize before Auto Scaling adds or removes more Amazon EC2 instances. You can configure a cool-down period for your Auto Scaling Group, which tells Auto Scaling to wait for some time after taking an action before it evaluates the conditions again. Auto Scaling enables you to run your Amazon EC2 fleet at optimal utilization.
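The add/remove conditions and cool-down behaviour described above can be sketched as follows. This is an illustrative model, not the actual Auto Scaling service logic; the 70%/10% thresholds and increment of 3 come from the example, while the cool-down length and the one-instance floor are assumed values:

```python
COOLDOWN = 300        # assumed cool-down period, in seconds
SCALE_UP_CPU = 70.0   # thresholds from the example above
SCALE_DOWN_CPU = 10.0
STEP = 3              # add/remove instances in increments of 3

def evaluate(avg_cpu, fleet_size, now, last_action):
    """Return (new_fleet_size, last_action_time), honoring the cool-down."""
    if now - last_action < COOLDOWN:
        return fleet_size, last_action          # still cooling down: do nothing
    if avg_cpu > SCALE_UP_CPU:
        return fleet_size + STEP, now           # demand spike: scale up
    if avg_cpu < SCALE_DOWN_CPU:
        return max(1, fleet_size - STEP), now   # lull: scale down (assumed floor of 1)
    return fleet_size, last_action

size, last = evaluate(85.0, 6, now=1000, last_action=0)       # scales up to 9
size, last = evaluate(90.0, size, now=1100, last_action=last)  # cooling down: stays 9
size, last = evaluate(5.0, size, now=1400, last_action=last)   # scales down to 6
print(size)  # 6
```

The middle call shows the cool-down in action: even though CPU is still above threshold, no further scaling happens until the fleet has had time to stabilize.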

Maintaining Your Amazon EC2 Fleet at a Fixed Size

If you’re sure you want to run a fixed number of Amazon EC2 instances, Auto Scaling helps ensure you’ll always have that number of healthy Amazon EC2 instances available and running. You can create an Auto Scaling Group and set a condition that your Auto Scaling Group will always contain this fixed number of instances. Auto Scaling evaluates the health of each Amazon EC2 instance in your Auto Scaling Group and automatically replaces unhealthy Amazon EC2 instances to keep the size of your Auto Scaling Group fixed. This ensures that your application is getting the compute capacity you expect.

Auto Scaling with Elastic Load Balancing

Let’s say that you want to make sure that the number of healthy Amazon EC2 instances behind an Elastic Load Balancer is never fewer than two. You can use Auto Scaling to set this condition, and when Auto Scaling detects that it has been met, it automatically adds the requisite number of Amazon EC2 instances to your Auto Scaling Group. Or, if you want to add Amazon EC2 instances when the latency of any one of your instances exceeds 4 seconds over any 15-minute period, you can set that condition, and Auto Scaling will take the appropriate action on your Amazon EC2 instances — even when running behind an Elastic Load Balancer. Auto Scaling works equally well for scaling Amazon EC2 instances whether you’re using Elastic Load Balancing or not.

To set up an auto-scaled, load-balanced EC2 application, the following steps are required:

1) CreateLoadBalancer: A LoadBalancer is represented by a DNS name and provides the single destination to which all requests intended for your application should be directed. This step is optional if your application is not load balanced.

Call CreateLoadBalancer with the following parameters:

  • AvailabilityZones = us-east-1a

  • LoadBalancerName = MyLoadBalancer

  • Listeners = lb-port=80,instance-port=8080,protocol=HTTP

2) CreateLaunchConfiguration: A LaunchConfiguration captures the parameters necessary to create new EC2 instances. Only one launch configuration can be attached to an AutoScalingGroup at a time. When you attach a new or updated launch configuration to your AutoScalingGroup, any new instances will be launched using the new configuration parameters.

Call CreateLaunchConfiguration with the following parameters:

Compulsory parameters:

  • ImageId = Unique ID (also called the AMI ID) of the Amazon Machine Image (AMI), assigned during registration (you can obtain it while bundling the AMI). E.g., ami-f7c5219e

  • LaunchConfigurationName = Name of the launch configuration to create. E.g., MyLaunchConfiguration

  • InstanceType = The instance type of the EC2 instance. E.g., m1.small

Optional parameters:

  • KeyName = The name of the EC2 key pair.

  • SecurityGroups = Names of the security groups with which to associate the EC2 instances.

  • UserData = The user data available to the launched EC2 instances.

  • KernelId = ID of the kernel associated with the EC2 AMI.

  • RamdiskId = ID of the RAM disk associated with the EC2 AMI.

  • BlockDeviceMappings = Specifies how block devices are exposed to the instance. Each mapping is made up of a VirtualName and a DeviceName.

3) CreateAutoScalingGroup: An AutoScalingGroup is a representation of an application running on multiple Amazon Elastic Compute Cloud (EC2) instances. The AutoScalingGroup can be used to automatically scale the number of instances or maintain a fixed number of instances.

Call CreateAutoScalingGroup with the following parameters:

Compulsory parameters:

  • AutoScalingGroupName = Name of the AutoScalingGroup.

  • AvailabilityZones = List of Availability Zones for the group.

  • LaunchConfigurationName = Name of the launch configuration to use with the group.

  • MinSize = Minimum size of the group.

  • MaxSize = Maximum size of the group.

Optional parameters:

  • LoadBalancerNames = List of LoadBalancers to use.

  • Cooldown = The amount of time, in seconds, after a scaling activity completes before any further trigger-related scaling activities can start.

4) CreateOrUpdateScalingTrigger: In Auto Scaling, the trigger mechanism uses defined metrics and thresholds to initiate scaling of AutoScalingGroups.

Call CreateOrUpdateScalingTrigger with the following parameters:

Compulsory parameters:

  • AutoScalingGroupName = The name of the AutoScalingGroup to be associated with the trigger.

  • Dimensions = A list of dimensions associated with the metric used by the trigger to determine whether to fire.

  • MeasureName = The measure name associated with the metric used by the trigger to determine when to fire; for example, CPU, network I/O, or disk I/O. Valid values: CPUUtilization | NetworkIn | NetworkOut | DiskWriteOps | DiskReadBytes | DiskReadOps | DiskWriteBytes

  • Statistic = The statistic that the trigger uses when fetching metric statistics to examine. Valid values: Minimum | Maximum | Sum | Average

  • Period = The period associated with the metric statistics, in seconds. Constraints: must be a multiple of 60.

  • TriggerName = The name for this trigger. Constraints: must be an alphanumeric string, unique within the scope of the associated AutoScalingGroup.

  • LowerThreshold = The lower limit for the metric. If all data points in the last BreachDuration seconds exceed the upper threshold or fall below the lower threshold, the trigger activates.

  • LowerBreachScaleIncrement = The incremental amount to use when performing scaling activities after the lower threshold has been breached. Note: if you specify only a positive or negative number, the AutoScalingGroup increases or decreases by that number of actual instances; if you specify a positive or negative number with a percent sign, it increases or decreases by that percentage.

  • UpperThreshold = The upper limit for the metric. If all data points in the last BreachDuration seconds exceed the upper threshold or fall below the lower threshold, the trigger activates.

  • UpperBreachScaleIncrement = The incremental amount to use when performing scaling activities after the upper threshold has been breached. The number/percentage semantics are the same as for LowerBreachScaleIncrement.

  • BreachDuration = The amount of time, in seconds, used to evaluate and determine whether a breach is occurring. The service looks at data between the current time and this many seconds in the past to see if a breach has occurred. Constraints: must be a multiple of 60.

Optional parameters:

  • Unit = The standard unit of measurement for a given measure that the trigger uses when fetching metric statistics to examine. Valid values: Seconds | Percent | Bytes | Bits | Count | Bytes/Second | Bits/Second | Count/Second | None.

  • CustomUnit = The user-defined custom unit for a given measure, used by the trigger when fetching the metric statistics it uses to determine whether to activate. Note: custom units are currently not available.
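The number-versus-percentage semantics of LowerBreachScaleIncrement and UpperBreachScaleIncrement can be sketched like this (the helper name is hypothetical):

```python
def apply_increment(current_size, increment):
    """Interpret a BreachScaleIncrement value: a plain integer changes the
    group by that many instances; a trailing '%' scales by a percentage of
    the current group size."""
    if increment.endswith("%"):
        change = int(round(current_size * int(increment[:-1]) / 100.0))
    else:
        change = int(increment)
    return max(0, current_size + change)

print(apply_increment(10, "2"))     # 12 (absolute: add 2 instances)
print(apply_increment(10, "-2"))    # 8  (absolute: remove 2 instances)
print(apply_increment(10, "50%"))   # 15 (percentage: grow by half)
print(apply_increment(10, "-50%"))  # 5  (percentage: shrink by half)
```

Rounding behaviour for fractional percentages is an assumption here; the sketch just illustrates the two forms the parameter accepts.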

Projecting Costs

Auto Scaling is enabled by Amazon CloudWatch and carries no additional fees. Each instance launched by Auto Scaling is automatically enabled for monitoring, and the Amazon CloudWatch monitoring charge applies. Partial hours are billed as full hours. Regular Amazon EC2 service fees apply and are billed separately.
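Since partial hours are billed as full hours, projecting billed monitoring hours is a simple rounding-up exercise; the per-instance running times below are made-up examples:

```python
import math

def billed_hours(run_times_hours):
    """Partial instance-hours are billed as full hours, so each instance's
    running time rounds up to the next whole hour before summing."""
    return sum(math.ceil(t) for t in run_times_hours)

# Three instances that ran 1.2h, 0.5h, and 3.0h respectively:
print(billed_hours([1.2, 0.5, 3.0]))  # 2 + 1 + 3 = 6 billed hours
```

Multiply the billed hours by the current CloudWatch and EC2 rates (not shown here, since prices change) to project the total cost.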

Load Balancing Wowza Media Server

Posted November 17, 2010 by flexplusjava
Categories: Uncategorized

The Load Balancer provides a system for load balancing between multiple Wowza Pro servers. It leverages the IServerNotify (ServerListener) interface in Wowza Pro. Each edge server is configured to use the ServerListenerLoadBalancerSender ServerListener class. We will call these servers the “edge” servers. Edge servers periodically (about every 2.5 seconds) send load and status information over UDP (you must add the UDP port to your firewall exceptions) to a single Wowza Pro server (or multiple servers) running the ServerListenerLoadBalancerListener ServerListener. We will call this server the “load balancer”. The load balancer keeps track of the load and availability of each of the edge servers with which it is communicating.
When a Flash client wishes to communicate with one of the edge servers, it first makes a request to the load balancer to get the address of the least loaded edge server. The Flash client then connects directly to this edge server. There are currently two ways for a Flash client to obtain the address of the least loaded edge server. The first method is to make a NetConnection.connect() request to an application on the load balancer that is running the ModuleLoadBalancerRedirector module. The ModuleLoadBalancerRedirector module rejects this connection request with an info.code of “NetConnection.Connect.Rejected”. The redirect URL will be contained in the info.ex.redirect field. The second method is to make an HTTP request to the load balancer from Flash using the URLLoader class. The load balancer will return the redirect host name or IP address in the response to this request.
The load balancing mechanism is dynamic. Each time a new edge server is started or stopped it will communicate with the load balancer to send its current status. This makes it very easy to add and remove servers from the pool of available edge servers simply by starting or stopping an edge server. There is also a Java and JMX API for temporarily removing an edge server from the pool.

Steps to Setup Load Balancer and Origin Server:

  • Copy the file wms-plugin-loadbalancer.jar from this zip archive to the [install-dir]/lib/ folder of the Wowza Pro server.

  • Copy the file conf/crossdomain.xml from this zip archive to the [install-dir]/conf/ folder of the Wowza Pro server.

  • Edit [install-dir]/conf/Server.xml and make the following changes.

Add the following ServerListener entry to the <ServerListeners> list:

    <ServerListener>
        <BaseClass>com.wowza.wms.plugin.loadbalancer.ServerListenerLoadBalancerListener</BaseClass>
    </ServerListener>

  • Add the following properties to the <Properties> section at the bottom of Server.xml:

    <Property>
        <Name>loadBalancerListenerKey</Name>
        <Value>023D4FB4IS83</Value>
    </Property>
    <Property>
        <Name>loadBalancerListenerIpAddress</Name>
        <Value>*</Value>
    </Property>
    <Property>
        <Name>loadBalancerListenerPort</Name>
        <Value>1934</Value>
        <Type>Integer</Type>
    </Property>
    <Property>
        <Name>loadBalancerListenerRedirectorClass</Name>
        <Value>com.wowza.wms.plugin.loadbalancer.LoadBalancerRedirectorConcurrentConnects</Value>
    </Property>
    <Property>
        <Name>loadBalancerListenerMessageTimeout</Name>
        <Value>5000</Value>
        <Type>Integer</Type>
    </Property>

  • Edit [install-dir]/conf/VHost.xml and replace the HostPort/HTTPProvider with the following XML snippet:

    <HTTPProvider>
        <BaseClass>com.wowza.wms.plugin.loadbalancer.HTTPLoadBalancerRedirector</BaseClass>
        <Properties>
            <Property>
                <Name>enableServerInfoXML</Name>
                <Value>true</Value>
                <Type>Boolean</Type>
            </Property>
        </Properties>
    </HTTPProvider>

  • Edit [install-dir]/conf/applications/myapp/Application.xml file and set stream type to “liverepeater-origin”.

Steps to Setup Edge Server:

  • Copy the file wms-plugin-loadbalancer.jar from this zip archive to the [install-dir]/lib/ folder of the Wowza Pro server. Also add wms-plugin-amazonaws.jar if it is not already present (optional; needed only if you are running on an Amazon server).
  • Edit [install-dir]/conf/applications/myapp/Application.xml file and set stream type to “liverepeater-edge”.
  • Edit [install-dir]/conf/Server.xml and make the following changes.

Add the following ServerListener entries to the <ServerListeners> list:

    <ServerListener>
        <BaseClass>com.wowza.wms.plugin.loadbalancer.ServerListenerLoadBalancerSender</BaseClass>
    </ServerListener>
    <ServerListener>
        <BaseClass>com.wowza.wms.plugin.amazonaws.ec2.env.ServerListenerEC2Variables</BaseClass>
    </ServerListener>

Add the following properties to the <Properties> section at the bottom of Server.xml:

    <Property>
        <Name>loadBalancerSenderTargetPath</Name>
        <Value>${com.wowza.wms.AppHome}/conf/loadbalancertargets.txt</Value>
    </Property>
    <Property>
        <Name>loadBalancerSenderRedirectAddress</Name>
        <Value>${com.wowza.amazonaws.ec2.AWSEC2_METADATA_PUBLIC_IPV4}</Value>
    </Property>
    <Property>
        <Name>loadBalancerSenderMonitorClass</Name>
        <Value>com.wowza.wms.plugin.loadbalancer.LoadBalancerMonitorDefault</Value>
    </Property>
    <Property>
        <Name>loadBalancerSenderMessageInterval</Name>
        <Value>2500</Value>
        <Type>Integer</Type>
    </Property>

Where [redirect-address] is the external IP address or domain name of this machine. This address will be used when redirecting to this edge server. When using this system on EC2 you can set [redirect-address] to ${com.wowza.amazonaws.ec2.AWSEC2_METADATA_PUBLIC_IPV4}, and upon server startup it will use the public IP address of the server for this value.

  • Create the file [install-dir]/conf/loadbalancertargets.txt using a text editor and enter the following two lines (the first line is a comment):

# [load-balancer-ip-address],[load-balancer-port],[encryption-key]

[load-balancer-ip-address],1934,023D4FB4IS83

  • Set stream type to “liverepeater-edge” in the Application.xml file and uncomment the following lines:

    <Repeater>
        <OriginURL>rtmp://192.168.1.72</OriginURL>
        <QueryString></QueryString>
    </Repeater>

  • Also set AutoAccept to true if it is not already.

Where [load-balancer-ip-address] is the ip address or domain name of the load balancer.

This configuration uses UDP port 1934 for communication between the edge servers and the load balancer. Be sure this port is open on your firewall. All communication between the edge server and the load balancer is encrypted and signed. The encryption key is set on the load balancer server using the loadBalancerListenerKey property and in the loadbalancertargets.txt file on the edge servers. These keys must match. An edge server can communicate with multiple load balancers by adding additional lines to the loadbalancertargets.txt file.

You can now start up the load balancer and multiple edge servers. If functioning properly, the edge servers will update the load balancer every 2.5 seconds with status and load information. You can find out which edge servers are currently registered with the load balancer, and their status, by opening a web browser and entering the following URL:

http://[load-balancer-ip-address]:1935/?serverInfoXML

Where [load-balancer-ip-address] is the IP address or domain name of the load balancer. It will return an XML document containing detailed information on each of the edge servers. Once you have your load balancing server up and running in a production environment, you may wish to turn off this query interface. You can do this by setting the HTTPProvider/Properties/Property enableServerInfoXML in [install-dir]/conf/VHost.xml to false.

Get least loaded server using HTTP

One of the methods to get the least loaded server from the load balancer is to make a request to the load balancer over http. The url for this request is:

http://[load-balancer-ip-address]:1935

Where [load-balancer-ip-address] is the IP address or domain name of the load balancer. This request will return the IP address of the least loaded server in the form “redirect=[ip-address]”.
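As a sketch (in Python rather than ActionScript, and with a made-up address), extracting the edge server's address from that response looks like:

```python
# Sketch: parse the load balancer's HTTP response body, which has the
# form "redirect=[ip-address]". The sample address is a documentation
# address, not a real server.
def parse_redirect(body):
    body = body.strip()
    if not body.startswith("redirect="):
        raise ValueError("unexpected load balancer response: %r" % body)
    return body[len("redirect="):]

print(parse_redirect("redirect=203.0.113.7"))  # 203.0.113.7
```

In the real system a Flash client would issue the HTTP request with URLLoader and then connect directly to the returned address; this sketch only shows the parsing step.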

Get least loaded server using NetConnection redirect

You can also get the least loaded server by configuring an application on the load balancer that uses the ModuleLoadBalancerRedirector module. To set up an application that uses this module, follow these steps:

  • Create the folder [install-dir]/applications/redirect.
  • Create the folder [install-dir]/conf/redirect and copy the file [install-dir]/conf/Application.xml into this new folder.
  • Edit the newly copied Application.xml file and set stream type to “liverepeater-edge”.
  • Edit the newly copied Application.xml file and add the following module entry as the last entry in the modules list:
    <Module>
        <Name>ModuleLoadBalancerRedirector</Name>
        <Description>ModuleLoadBalancerRedirector</Description>
        <Class>com.wowza.wms.plugin.loadbalancer.ModuleLoadBalancerRedirector</Class>
    </Module>

  • Add the following properties to the properties section at the bottom of the Application.xml file:

    <Property>
        <Name>redirectAppName</Name>
        <Value>[application-name]</Value>
    </Property>
    <!--
    <Property>
        <Name>redirectPort</Name>
        <Value>[redirect-port]</Value>
    </Property>
    -->
    <!--
    <Property>
        <Name>redirectScheme</Name>
        <Value>rtmp</Value>
    </Property>
    -->
    <Property>
        <Name>redirectOnConnect</Name>
        <Value>true</Value>
        <Type>Boolean</Type>
    </Property>

Where [application-name] is the name of the application you wish to redirect to on the edge server, and [redirect-port] is the port to redirect to (such as port 1935 or port 80). The redirectPort and redirectScheme properties are commented out so that the system uses the same scheme and port that were used to connect to the load balancer when connecting to the edge server. This works better when any kind of protocol rollover (rtmp to rtmpt) or port rollover scheme is in use.

Configuring Apache HTTP Server and Tomcat with mod_jk

Posted November 17, 2010 by flexplusjava
Categories: Uncategorized

 

Steps :

1) Install Apache 2.2 (see the installation guide) and Tomcat

2) Download the appropriate mod_jk from http://tomcat.apache.org/connectors-doc/

3) The Apache web server is often used in front of an application server to improve performance in high-load environments. mod_jk allows request forwarding to an application server via a protocol called AJP. Configuring this involves enabling mod_jk in Apache, configuring an AJP connector in your application server, and directing Apache to forward certain paths to the application server via mod_jk.

mod_jk is sometimes preferred to mod_proxy because AJP is a binary protocol, and because some site administrators are more familiar with it than with mod_proxy.

The configuration below assumes your pravindemo instance is accessible on the same path on the application server and the web server. For example:

Externally accessible (web server) URL http://www.example.com/pravindemo/
Application server URL (HTTP) http://app-server.internal.example.com:8080/pravindemo/

The AJP connection of the application server is set to: app-server.internal.example.com:8009.

Configuring mod_jk in Apache

The standard distribution of Apache does not include mod_jk. You need to download it from the JK homepage and put the mod_jk.so file in your Apache modules directory.

Next, add the following in httpd.conf directly or included from another file:

# Put this after the other LoadModule directives
LoadModule jk_module modules/mod_jk.so

# Put this in the main section of your configuration (or desired virtual host, if using Apache virtual hosts)
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel info

JkMount /pravindemo worker1
JkMount /pravindemo/* worker1

Configuring workers.properties

Create a new file called ‘workers.properties’, and put it in your Apache conf directory. (The path for workers.properties was one of the configuration settings above.)

worker.list=worker1

worker.worker1.host=app-server.internal.example.com
worker.worker1.port=8009
worker.worker1.type=ajp13

Tomcat Configuration

In Tomcat 5, the AJP connector is enabled by default on port 8009. An absolutely minimal Tomcat server.xml is below for comparison. The relevant line is the Connector with port 8009 – make sure this is uncommented in your server.xml.

<Server port="8005" shutdown="SHUTDOWN">
  <Service name="Catalina">

    <!-- Define a HTTP/1.1 Connector on port 8080 -->
    <Connector port="8080" />

    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009" protocol="AJP/1.3" />

    <Engine name="Catalina" defaultHost="localhost">
      <Host name="localhost" appBase="webapps">
        <Context path="/pravindemo" docBase="/opt/webapps/pravindemo"/>
        <Logger className="org.apache.catalina.logger.FileLogger"/>
      </Host>
    </Engine>
  </Service>
</Server>

Points to note:

  • the Connector on port 8009 has protocol of “AJP/1.3”. This is critical.
  • the Context path of the pravindemo application is “/pravindemo”. This must match the path used to access pravindemo on the web server.
  • we recommend keeping your application Contexts outside the server.xml in Tomcat 5.x. The above example includes them for demonstration only.

Improving the performance of the mod_jk connector

The most important setting in high-load environments is the number of processor threads used by the Tomcat AJP connector. By default, this is 200, but you should increase it to match Apache’s MaxClients setting (256 by default):

<Connector port="8009" minSpareThreads="5" maxThreads="256" protocol="AJP/1.3" />

All the configuration parameters for the AJP connector are covered in the Tomcat documentation.

Type Casting and ObjectUtil.copy(obj) in ActionScript 3.0

Posted May 7, 2010 by flexplusjava
Categories: Flex Gotchas

When programming in object-oriented languages, in this case ActionScript 3.0, needing different copies of the same object is common.
For many, the most obvious approach would be to simply assign one object to another, like this:

myObj2 = myObj1;

But this approach does not copy the object; it only copies the object reference. In the end there is only one object, referenced from two places, meaning that changing myObj1 is the same as changing myObj2 and vice versa.
By the way, in Flex every object is passed by reference!

The question is how do we clone an object in ActionScript 3.0?

ActionScript 3.0 has several utility functions, and one of them is ObjectUtil.copy(obj), which can be found in the mx.utils package.

This function performs a deep copy of the object given as an argument. It does this using the Flash Player's built-in AMF capabilities (yes, in Flex you will find really odd and original solutions to problems). What happens is that the entire object is serialized into an array of bytes, and when the bytes are deserialized, a brand new object is created, copying all of the original contents.
Still, you won’t be able to type cast most of the resulting objects (in fact, only built-in types can be type cast). This is because when an object is deserialized from AMF, although it has all the properties of a class instance, it will not be a true class instance, nor will it hold any references to the original class.
This is solved by adding type information about the object to the AMF packet, which can be done using the registerClassAlias() method available in the flash.net package. This method allows the class (type) of an object to be preserved when the object is encoded in Action Message Format (AMF).

Let’s take a look at the following source code (three snippets: A, B, and C):

// A
public var myObj:MyObject = new MyObject();
myObj.someProperty = "myProperty";

public var myObjCopy:Object = ObjectUtil.copy(myObj);
trace(myObj.someProperty); // "myProperty"
trace(myObjCopy.someProperty); // "myProperty"
myObjCopy.someProperty = "myChangedProperty";
trace(myObj.someProperty); // "myProperty"
trace(myObjCopy.someProperty); // "myChangedProperty"

// B
public var myObj:MyObject = new MyObject();
myObj.someProperty = "myProperty";
public var myObjCopy:MyObject = ObjectUtil.copy(myObj) as MyObject;
trace(myObj.someProperty); // "myProperty"
trace(myObjCopy.someProperty);

// FAULT: cannot access object with null reference.
// Execution stops!

// C
public var myObj:MyObject = new MyObject();
myObj.someProperty = "myProperty";
registerClassAlias("my.package.myObject", MyObject);
public var myObjCopy:MyObject = ObjectUtil.copy(myObj) as MyObject;
trace(myObj.someProperty); // "myProperty"
trace(myObjCopy.someProperty); // "myProperty"
myObjCopy.someProperty = "myChangedProperty";
trace(myObj.someProperty); // "myProperty"
trace(myObjCopy.someProperty); // "myChangedProperty"

Why does it work in A and C but not in B?

Well, in A it works because we are using the type Object, and it would also work for other built-in types such as Array.
In C it works because we register (registerClassAlias) the object's type before it gets encoded, which results in a successful type cast.
In B we don't preserve the object's type, meaning that after the object gets deserialized it holds no reference to the MyObject class. The result is a failed type cast, which in turn results in a null object reference.
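The same effect shows up in other serialization systems. As a rough Python analogy (not Flex): a JSON round-trip keeps an object's properties but forgets its class, much like AMF without registerClassAlias, while pickle records the class and restores a true instance, much like AMF with it:

```python
import json
import pickle

class MyObject(object):
    def __init__(self):
        self.someProperty = "myProperty"

obj = MyObject()

# JSON keeps the data but not the type: we get back a plain dict,
# analogous to AMF deserialization without a registered class alias.
restored = json.loads(json.dumps(obj.__dict__))
print(type(restored).__name__)   # dict
print(restored["someProperty"])  # myProperty

# pickle records the class (like registerClassAlias), so the clone
# really is a MyObject instance and the type survives.
clone = pickle.loads(pickle.dumps(obj))
print(isinstance(clone, MyObject))  # True
print(clone.someProperty)           # myProperty
```

This is only an analogy to illustrate the idea; the AMF details in Flex differ, but the principle of carrying type information alongside the data is the same.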

But… ObjectUtil.copy(obj) is not bulletproof. I will not go into that here, because Darron Schall already has a great post on the issue.

Custom Validator

Posted April 23, 2009 by flexplusjava
Categories: Uncategorized

After some googling, I decided to write my own custom validator, which shows an error tip when validation fails.

By default, the Flex validator shows the error tip only on mouse-over.

When validation fails it shows only a red border without the error tip, but I don't think that is clear enough for the user.

So I developed a custom validator; here is the code:
package
{
    import flash.events.MouseEvent;
    import flash.geom.Point;

    import mx.controls.TextInput;
    import mx.controls.ToolTip;
    import mx.events.ValidationResultEvent;
    import mx.managers.ToolTipManager;
    import mx.validators.Validator;

    public class CustomValidator extends Validator
    {
        private var errorTip:ToolTip;

        public function CustomValidator()
        {
            super();
            addEventListener(ValidationResultEvent.INVALID, handleInvalid);
            addEventListener(ValidationResultEvent.VALID, handleValid);
        }

        override public function set source(value:Object):void
        {
            super.source = value;
            if (source)
            {
                source.addEventListener(MouseEvent.ROLL_OVER, handleMouseOver);
                source.addEventListener(MouseEvent.ROLL_OUT, handleMouseOver);
                source.addEventListener(MouseEvent.MOUSE_OVER, handleMouseOver);
                source.addEventListener(MouseEvent.MOUSE_OUT, handleMouseOver);
                source.addEventListener(MouseEvent.MOUSE_MOVE, handleMouseOver);
            }
        }

        private function handleInvalid(event:ValidationResultEvent):void
        {
            if (!errorTip)
            {
                // convert the component's top-left corner to the global coordinate system
                var pt:Point = (source as TextInput).localToGlobal(new Point(0, 0));
                var startX:Number = pt.x + source.width + 5;
                var startY:Number = pt.y;
                errorTip = ToolTipManager.createToolTip(requiredFieldError, startX, startY) as ToolTip;
                errorTip.setStyle("styleName", "errorTip");
                errorTip.visible = true;
            }
            else if (!errorTip.visible)
            {
                errorTip.visible = true;
            }
        }

        private function handleValid(event:ValidationResultEvent):void
        {
            // once the valid event fires, destroy the tooltip so it can be garbage collected
            if (errorTip)
            {
                ToolTipManager.destroyToolTip(errorTip);
                errorTip = null;
            }
        }

        private function handleMouseOver(event:MouseEvent):void
        {
            // for the time being this is a workaround to stop the default
            // error tip from being displayed; it could probably be handled better.
            event.stopImmediatePropagation();
        }
    }
}

You can try this code; please get in touch with any queries.

Posted January 17, 2009 by flexplusjava
Categories: Uncategorized

Hello, it’s nice to see you. While you’re here you can read the blog, find helpful links, learn about me, or just get in contact. It’s early days and I’m still tweaking the code and style, so please hang on in there!