Archive for February 2012
Alcatel-Lucent (NYSE: ALU) moved past Juniper Networks (NYSE: JNPR) in terms of carrier infrastructure market share in the fourth quarter of 2011, a new study says, only the second time in the past four years Alcatel-Lucent has pulled ahead into second place behind perennial leader Cisco (Nasdaq: CSCO).
The report, from Synergy Research Group, attributed Alcatel-Lucent’s surge to a solid performance in the EMEA region, where it saw strong revenue growth from the previous quarter. Juniper’s struggles in the quarter allowed Alcatel-Lucent to take a clear market share lead of 2.7 percentage points. For the full year, Juniper maintained its number-two ranking, with a market share of 18.2 percent versus 16.8 percent for Alcatel-Lucent.
Jeremy Duke, founder and chief analyst of Synergy, said Alcatel-Lucent and Juniper both nevertheless saw strong growth in the EMEA region, recording market shares of more than 25 percent and 22 percent respectively in the final quarter.
Synergy’s 4Q11 Carrier Infrastructure Market Share report provides quarterly market shares for service provider core routers, edge routers and carrier Ethernet switches.
Overall carrier infrastructure revenues hovered around the $3 billion mark for the third successive quarter, with full-year revenues inching ahead 3.5 percent from 2010. But the fourth quarter wasn’t kind to the segment; revenues were actually down almost 10 percent from a year ago.
“However, there was plenty of good news for Cisco in the quarter, including a market share gain in the high-growth APAC region, increasing its share of the worldwide service provider core router market to almost 65 percent, and increasing its share of a declining North American edge router and switch market–to levels it hasn’t achieved in over three years,” said Duke.
The Cisco Technical Assistance Center (TAC) is Cisco’s tech support center, and it has some very talented people working there. Before you pick up the phone to call the TAC, though, you should do your best to resolve the problem yourself and document each step along the way. One of the first things the TAC will do is ask what you’ve done so far to resolve the issue, and giving them an accurate answer is a huge step toward getting the problem resolved.
If you’re used to tech support where calls are prioritized by the order in which they come in, the TAC will open your eyes. Cisco’s TAC uses four priority levels to determine which cases should be handled first:
Priority 1: The network is down, no workaround is available, and business processes are at a critical stage.
Priority 2: The network is badly degraded, business processes are impacted, and no workaround is possible.
Priority 3: The network is degraded, but business processes are working for the most part.
Priority 4: Basic support call for installation or configuration, or for information on Cisco products.
Once you open a case with the TAC, you can check its status online with the Cisco TAC Case Query tool. This is an interactive tool that allows you to update the status of the case from your end without placing additional phone calls to the TAC. It’s also a faster way to let your TAC engineer know how your case is proceeding.
Gather all the information you can BEFORE you call tech support. Don’t just pick up the phone without investigating the issue.
This goes double for opening TAC calls. Some good practices to follow before calling TAC:
Document changes made before the issue arose.
Document changes made to configs AFTER the trouble occurred.
Document ping and traceroute results.
Make sure the log buffer is large enough to keep the errors and console messages resulting from the issue.
Depending on the technology, test everything you can before calling tech support. For Frame Relay, for instance, check everything between your site and the DCE closest to the site. (Trust me, that’s the first thing they’re going to ask you to do.) For ISDN, check the configurations carefully, and again check everything between your site and the provider.
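The "document everything" steps above can be sketched as a small script that gathers the basic evidence before you dial. This is a minimal sketch, assuming Unix-style `ping` and `traceroute` binaries on the PATH; the function names and report format are illustrative, not part of any Cisco tool.

```python
import subprocess
from datetime import datetime, timezone

def build_diagnostics(target):
    """Commands worth capturing before opening a TAC case (placeholder list)."""
    return [
        ["ping", "-c", "4", target],
        ["traceroute", target],
    ]

def capture_diagnostics(target, runner=subprocess.run):
    """Run each command and collect its output into one timestamped report."""
    stamp = datetime.now(timezone.utc).isoformat()
    report = [f"TAC pre-call diagnostics for {target} at {stamp}"]
    for cmd in build_diagnostics(target):
        result = runner(cmd, capture_output=True, text=True)
        report.append("$ " + " ".join(cmd) + "\n" + result.stdout)
    return "\n".join(report)
```

Saving the resulting report (along with your config diffs) gives the TAC engineer exactly the "what have you done so far" answer they will ask for.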
Does your Windows XP system run slower and slower after being used for some time? It is inevitable that Windows will start running slowly if you don’t maintain it properly. This article will tell you how to optimize your Windows system and make your computer run faster.
Repair the Registry
If there is one thing that can affect PC performance dramatically, it is the registry. The registry is the data centre of a Windows system: it stores the vital data and parameters of the hardware drivers, software and system programs installed on your computer. That data and those settings define every operation of Windows and its programs, such as startup, loading files and shutdown.
But the registry is extremely vulnerable. Program uninstallation, invalid operations, viruses and other issues can easily damage it. If the registry is corrupt or damaged, the result is decreased PC performance and various errors such as freezes, programs not responding and blue screens. To keep Windows optimized, you should scan and repair the registry regularly.
Clean up Junk Files
When Windows is running, it creates a lot of temporary files. But when it shuts down, some of those temporary files are left on the system disk. The files are no longer used, but they still have a negative effect on your computer: they slow down data access to the system disk and thus the computer’s running speed. To speed up your computer, clean the junk files completely from your disk.
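A junk-file sweep like the one described can be sketched in a few lines. This is a minimal sketch, assuming "junk" means ordinary files in a temporary folder untouched for more than a given number of days; a real cleaner would also need to skip files still held open by running programs.

```python
import time
from pathlib import Path

def clean_junk(temp_dir, max_age_days=7, now=None):
    """Delete files in temp_dir that have not been modified for max_age_days."""
    now = time.time() if now is None else now
    cutoff = now - max_age_days * 86400  # seconds per day
    removed = []
    for path in Path(temp_dir).iterdir():
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()              # remove the stale temporary file
            removed.append(path.name)
    return removed
```

Pointing this at a temp directory (and reviewing the returned list) is the same operation a one-click cleanup tool performs behind the scenes.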
Handle Other Issues
There are other issues that can slow down a Windows system. To optimize your system, you should tweak RAM usage, clean up redundant DLL files, disable unnecessary startup items and so on. Normally it is hard for ordinary users to cope with all of these issues manually, so you can use an optimization tool to help; it is the easiest way to optimize a PC.
In an effort to accommodate enterprise users looking to implement private and hybrid clouds, Cisco in the coming months will unveil an “integrated” WAN routing system of existing, but enhanced, products.
Cisco’s Integrated Enterprise WAN Solution (IEWS) is comprised of its ASR 1000 edge router, ISR branch routers and WAAS WAN optimization appliances. It’s designed to allow enterprises to connect to the cloud with an application aware infrastructure that is simple to provision and manage, says Praveen Akkiraju, senior vice president and general manager of Cisco’s Network Services Technology Group, which oversees enterprise routing.
“We’re leveraging our extensive network footprint and credibility to connect [users] to cloud assets,” Akkiraju says.
He would not go into specific detail on the cloud-optimized enhancements to the individual platforms within IEWS. But as an example, an ISR at a branch office could recognize traffic destined to or coming from the cloud, route it to Cisco’s own ScanSafe SaaS-enabled security service for cleansing, and then send it on to its destination.
IEWS is in customer trials now. Akkiraju says it will launch formally in the “next few months.”
IEWS is specifically optimized for cloud connectivity, but can also serve traditional infrastructure deployments, Akkiraju says, just as its individual piece parts always have. It’s intended to be customizable for the enterprise user’s specific environment, whether a private WAN, private cloud, or part of a hybrid cloud solution.
“Cloud is not a single product,” Akkiraju says.
Management of IEWS will be through Cisco Prime, he said, which debuted last spring.
Cisco’s share of the enterprise router market is 52% in the high end and 84% in access routers, according to third-quarter 2011 data from Dell’Oro Group. The worldwide enterprise router market in the third quarter was $873 million, according to Dell’Oro.
Read more about lans & wans in Network World’s LANs & WANs section.
StratoGen News – UNITED KINGDOM – StratoGen, Europe’s leading VMware hosting provider, today launched its cloud platform in the United States, built on the same enterprise-class components that have made its European offering so successful.
“We are bringing some real innovations to shake up the cloud hosting market in the US” commented Karl Robinson, Sales Director. “Our VMware hosting platform gives you the ability to go way beyond simply hosting virtual machines – you can build and configure firewalls, create routed or internal networks, vApps and much more from a single user interface. What’s more, our unique SHARPlock service provides a facility to add multiple layers of additional authentication, to ensure we are one of the most secure clouds in the industry.”
The underlying hardware of the StratoGen platform is equally impressive, with storage provided on high-performance NetApp SANs and compute facilities on the latest HP blades. An all-Cisco network and a 100% uptime service level agreement complete the picture.
David Elliott, Technical Director, said: “We’ve taken a no-compromise approach to the build-out of our VMware hosting platform. Based on our extensive experience with these enterprise-class components in Europe, I’m confident the performance levels and stability of the platform will exceed those of competing services in the US. As always, the StratoGen team of VMware-qualified support staff will be available 24/7.”
StratoGen is offering a free seven-day trial of the service, and customers who sign up before the end of January 2012 can enjoy a 25% discount.
For further information on StratoGen VMware hosting please visit www.stratogen.com
About StratoGen
StratoGen is a leading VMware hosting company with a worldwide client base and an award-winning cloud platform. StratoGen hosted VMware products include a 100% uptime service level guarantee.
This entry was posted on Friday, January 6th, 2012 at 9:21 am and is filed under Press Releases, Web Host News. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.
For anyone who’s walking around with a Siri-enabled iPhone 4S, 2007 may seem lost in a haze of time. But it was in that year that the first iPhone was unveiled and went on to write smartphone history.
The year 2011 has been a good one for the product category, with stiff competition among vendors who are at the same time intent on creating models featuring ever higher screen resolutions and speedier quad-core processors and graphics processing units (GPUs).
All of which will continue in 2012 and even lead to a year of "mobile ascendancy", according to IDC. Mobile devices will surpass PCs in both shipments and spending, while mobile apps, with 85 billion downloads, will generate more revenues than the mainframe market.
Put together, these advances will let vendors build smartphones that put all current offerings in the shade. The challenge will be to implement the hardware improvements in a way that won’t leave consumers even more disappointed with battery life than they are today.
While this year brought the arrival of the first smartphones equipped with LTE (Long Term Evolution) and NFC technology, 2012 will see models with improved battery performance.
Apple is also reported to be planning NFC features for a future iPhone. RIM, Nokia and Samsung have already introduced NFC-enabled models.
According to IMS Research, the number of NFC-enabled phones shipped in 2011 totalled 35 million. The enabling of other cellular handsets will drive that number to nearly 80 million by the end of 2012.
Despite ups and downs for some vendors and operating systems, larger screens are one of the clear hardware trends, and they prompted Samsung to come out with the Galaxy Note’s 5.3-inch display. There are a host of products whose screens measure between 3.5 and 4.7 inches. The iPhone already has lots of pixels, but its screen size has remained at 3.5 inches. However, along with LTE, the iPhone 5 is expected to have a bigger screen.
But that doesn’t mean smartphones have no room left for innovation. It can be argued that screen and overall device size have reached a practical maximum, which means vendors have to find another performance metric to entice users to pick up a new handset.
According to Ashraf Fawakherji, General Manager of Telecommunications Group at Samsung Gulf Electronics, the technology landscape will continue to change driven by factors such as a developed infrastructure, higher broadband speeds and growing consumer demand for the latest devices.
He said some of the trends that consumers can look forward to next year within the smartphone segment include changes in design, improved battery performance and screen sizes. In addition, 2012 will see the technical integration of various broadband technologies.
Higher resolution is a likely candidate. Recent arrivals like the LG Nitro HD and the Galaxy Nexus have already made the leap to a screen resolution of 1280×720 pixels, and more are reportedly on the way.
Next year, 720p resolution will be a standard high-end feature. Many manufacturers are trying to devise methods for making small screens with 720p resolution.
Big screens have the unfortunate side effect of using a lot of power, but upcoming announcements will focus on advancements in making displays more energy-efficient, according to Geoff Blaber, an analyst at CCS Insight.
Weak processors have given way to dual-core powerhouses. Smartphone cameras are now so capable that the best of them make owning a point-and-shoot camera redundant. The stop-motion animation film Gulp and the feature film Olive were shot using Nokia N8 phones with 12-megapixel cameras.
"Technology is moving so quickly and cell phones are really going to be the thing that does everything eventually," Hooman Khalili, director of Olive, had said.
But Fawakherji and Raed Hafez, general manager for Motorola Mobility, Middle East and Africa, agreed that while the smartphone will become a converged device that makes lives easier, it should not aim to replace professional digital cameras, which are proving increasingly popular among consumers. The two products serve very different consumer needs.
Display resolutions, meanwhile, already exceed the limits of the human eye’s ability to distinguish separate pixels.
Clock speeds will also increase next year. But improved performance won’t come from faster main processors alone.
The GPU will also play a more important role in upcoming products, according to Blaber. ARM’s Mali-T604 is one of the GPUs that will be used in high-end smartphones next year. It can use up to four cores and offers five times the performance of previous Mali graphics processors, according to ARM.
Higher clock speeds
Tegra 3, the world’s first quad-core mobile chip, will bring about a qualitative improvement in multi-tasking, web browsing and applications performance.
"Qualcomm is developing quad-core Snapdragon processors, which will be ready in 2012," said Jay Srage, President of Middle East and Africa, Qualcomm.
"They will be part of the S4 class of Snapdragon processors. Still, the number of cores is not what’s important when it comes to mobile device performance. What matters most is how you integrate all parts of the processor — the CPU, graphics processor, software and other components — and make them work together efficiently," Srage said.
"The smartphone industry is very much running in parallel to the computer industry. The fact that processors are moving from single core to eight cores on a chip will be repeated in the mobile industry, but at a faster rate. To this end, we will absolutely see quad-core chips next year," Hafez said.
According to Fawakherji, higher clock speeds and the use of dual- and quad-core technology are definitely two areas of interest for smartphone manufacturers. By incorporating chipsets that offer higher processing speeds, smartphone vendors are able to develop devices that maximise run-time efficiency, leading to noticeably smoother, faster and longer multi-tasking than ever before and a more enjoyable experience for consumers.
The smartphone industry will mirror the computer industry, Hafez said, adding that a few years ago the computer industry was in a clock-speed race. "At one point, this stopped and there was a realisation that pushing the clock was no longer a differentiation or a means to make an impact with the consumer. Instead, there was a move towards multiple cores. We’re going to see the same thing happening for smartphones where, for example, a lower-clocked quad-core chip will be faster than a 2GHz single-core chip."
Following the launch of Siri, Google and Microsoft are no doubt scrambling to bring more voice controls to their respective smartphone platforms. As for Siri, there’s a slight chance that Apple will open up the virtual personal assistant to third-party apps in 2012. More likely, however, the company will expand Siri’s functionality in some fashion.
Augmented reality is another feature we have seen on a few apps here and there, but it will become a standard feature in the phones of tomorrow, as opposed to being limited to one-off apps such as Google Goggles or the Layar browser.
While smartphones will continue to improve noticeably in processor power, screen quality and data speeds, battery life is likely to see only minor improvements compared with this year. The major technological breakthroughs that could keep users from worrying about getting through the day are still in the laboratory, so the best hope for better battery life lies in optimisation.
Smartphone trends for 2012 will be a highlight of the Consumer Electronics Show, which starts on January 10 in Las Vegas, and the Mobile World Congress, which takes place at the end of February in Barcelona.
Android rules the roost
Dubai: Android took the crown in the war of operating systems in 2011. But things are going to take a dramatic turn with the launch of Windows Phone 7.
With Nokia Lumia’s launch in major markets expected in the first quarter of next year, Windows market share will be ticking upwards from the current 5.6 per cent, behind Symbian in fifth place.
But by 2012 it will jump to 10.8 per cent, switching places with Symbian. And three years later, in 2015, it will have surpassed iOS and Research in Motion, reaching 19.5 per cent and taking the No 2 spot in worldwide market share.
Gartner predicts that RIM will fall from a 2010 peak market share of 16 per cent to 11.1 per cent by 2015. It also predicts that Apple’s iOS share will rise this year but fall by 2015.
Google’s Android will rise from a narrow lead of 22.7 per cent in 2010 to a dominant 49.2 per cent market share by 2012. Apple’s iOS will remain the second biggest platform worldwide through 2014 despite its share decreasing slightly after 2011.
Gartner analyst Roberta Cozza explained the Windows Phone growth, noting that Nokia will be able to sell Windows Phones at lower prices than many smartphones sold today and will sell them globally through an extensive sales channel.
Nokia will also bring its reputation for solid smartphone hardware to the Windows Phone alliance, she added. "All the strengths Nokia still has will play a role" in Windows Phone moving to second place in 2015, Cozza said.
With Apple’s iPhone 3GS sticking around for another year, we’ll probably see some strong competition in the sub-$50 (Dh183.50) range, and not just from Android but from Windows-enabled phones as well.
Microsoft CEO Steve Ballmer has said that "the cheapest phones will be Android and we are trying to lower the production cost of smartphones, thereby reducing the sales price".
"By 2015, 67 per cent of all open OS devices will have an average selling price of $300 or below, proving that smartphones have been finally truly democratised," said Cozza.
"As vendors delivering Android-based devices continue to fight for market share, prices will decrease, to the further benefit of consumers.
"Android’s position at the high end of the market will remain strong, but its greatest volume opportunity in the longer term will be in mid- to low-cost smartphones, above all in emerging markets."
2012 trends: Data usage to gallop
A computer is a fairly complicated device containing lots of different parts. This month I am going to take a look at one of the parts that you will find inside of your computer case and give you some idea of what its various components do. Armed with this knowledge you will have a better idea of what you are looking at when next you go to buy a computer.
The most complicated part of the computer is the flat circuit board (called the motherboard) into which most of the other devices within the computer are plugged. We’ll start by considering what components are found on the motherboard and what part that they play in the operation of your computer.
There are a number of processing chips hard-wired onto your motherboard. One of these is the BIOS chip (basic input/output system), which contains the permanent memory holding the instructions that tell your computer how to start up and load the operating system. The settings the BIOS uses to reflect the setup of your computer can be changed by running a special setup program contained within the chip, accessed by pressing a special key combination (often just the delete key) just after the computer first starts up. Some motherboards have a second BIOS chip to provide additional protection against the contents of the BIOS memory becoming corrupted. Most modern BIOS chips also allow you to replace the program stored in the chip itself by running a special update program; this is known as flashing the BIOS.
One chip on the motherboard needs to run constantly, even when your computer is off. For this reason it is powered by a battery that you will also find on the motherboard. This is the RTC chip (real-time clock), which keeps track of the current date and time.
The two biggest chips that you will find hard-wired to your motherboard are known as the north bridge and south bridge chips. The north bridge chip is responsible for controlling the central processing unit and all of the random access memory plugged into your motherboard. The south bridge chip controls most of the other devices on the motherboard, such as the PCI bus, which carries most of the peripheral devices (either built into the board and hence permanently connected, or plugged into the various slots found along the back edge of the motherboard). Motherboard manufacturers have recently been working on redesigning the method these two chips use to communicate with one another, as the increasing speed of the many other components in the system means this link has rapidly approached the point of becoming the bottleneck in communications within your computer. It is these two chips that between them control the communications between all of the other components in your system.
These days there are usually a number of computer sub-systems built into the motherboard which in early computers had to be plugged in separately. In the earliest computers even the RTC chip had to be installed via a plug-in card. Today the EIDE hard disk controller, the floppy drive controller, the serial port controller, the parallel port controller and the USB (universal serial bus) port controllers are almost always built into the motherboard and run off the PCI bus controlled by the south bridge chip. Some computers even have integrated sound or networking; while this adds to the expense of the motherboard and makes upgrading more difficult, it may be the ideal solution for a cheap business system. Other controllers, e.g. SCSI controllers, are also occasionally integrated into motherboards.
Also to be found on the motherboard are a number of sockets and slots that allow you to plug other components into the motherboard in order to convert it into a complete computer. Attached to the north bridge are the processor slot or socket where the CPU chip is plugged in as well as the memory sockets where the main memory gets plugged in. The design of these sockets (as well as the programming incorporated into the various chips on the motherboard) will determine the type and speed of the processor and memory that can be used with this motherboard.
Attached to the south bridge are the main slots along the back of the computer where the various other devices get plugged in. In early computers these slots consisted mostly (or entirely) of 8-bit or 16-bit ISA slots, with perhaps one slot extended to the VESA standard to take the graphics card. Some 386 computers through early Pentium systems also had a 32-bit version of the ISA slot called an EISA slot. More modern computers use PCI slots (mostly the 32-bit variety, though a longer 64-bit version also exists). They may also have a single AGP slot designed to take the graphics card. Finally, there will be a number of sockets on the motherboard providing the means whereby the integrated controllers previously mentioned communicate with their associated devices.
As you can see from the above, the motherboards found in different computers can vary quite significantly from one another in the options supplied on the board, the slots available for plugging other components into the board, and even the means and speed by which the various components on the board communicate with one another. The choice of motherboard used in a given computer can dramatically affect the speed of the computer, as well as determining what other parts can be incorporated into it.
You build your database using tables and queries, testing the architecture and setting keys and indexes. Then you want to make the reporting easy to use and pretty to look at, and now you are spending more time developing Microsoft Access forms and reports.
You start building your first form or report and the methods you have available could be one of the following:
- Using the wizard tool which steps you through screen by screen.
- For later versions of Microsoft Access, selecting a pre-defined template by first choosing the data source (table or query).
- Starting from a blank canvas and setting properties, including the ‘Record Source‘, taking full control.
- Using Access VBA code to build and generate objects dynamically, which requires more advanced knowledge of Microsoft Access, including VBA.
But what about the ‘Record Source‘ property itself? On which should it be based: a Table, a Query or a SQL statement?
The functional use of a form may pretty much dictate the route to take. Consider the following questions to help you decide:
Q: Will the form be used to enter new records only?
A: Use a Table, because you can guarantee the values in fields will be written back to the source data for a bound form.
Q: Will the form be used to find records by searching for a value in a field based on two or more tables?
A: Use a Query built from pre-defined joined tables (creating that all-important relationship).
Q: Will the form need to dynamically pass values with event-driven actions, such as after entering a value in a control?
A: Use a SQL statement or a Query that is linked to the form’s control.
There are performance trade-offs in choosing the right approach, but the first thing to remember is that the purpose of the form or report should be determined first; then let the Microsoft Access database engine (JET) optimise the database for you. This is why most developers opt to make the Query object the main and first choice.
SQL statements are used more within VBA code procedures and can be quicker to run, since optimisation can be controlled in code, especially when using the types of data recordsets provided by DAO, ADO or ADO.NET (another article, perhaps).
The final thought to consider is how reusable the data source will be. Will it serve more than one Access form or report? If so, use a query. If the source is exclusive to the one form or report, then embed SQL directly into it: you can pass parameter values, set criteria and calculate within the form or report, and save having an extra dedicated query stored in the first place.
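As a quick illustration of the third Q&A above (passing a control’s value via event-driven code), the following VBA sketch rebuilds a form’s Record Source from an AfterUpdate event. The form, control and table names (txtCustomerID, tblOrders, CustomerID) are hypothetical examples, not taken from this article.

```vba
' Hypothetical example: rebuild the form's Record Source whenever the
' user types a customer ID into the txtCustomerID control.
Private Sub txtCustomerID_AfterUpdate()
    ' Assigning RecordSource automatically requeries the bound form.
    Me.RecordSource = "SELECT * FROM tblOrders " & _
                      "WHERE CustomerID = " & Me.txtCustomerID & ";"
End Sub
```

In practice you would validate or parameterise the value before concatenating it into the SQL string.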
As an advanced and in-demand server platform, Windows provides best-in-class services and tools to users around the globe. Windows servers are developed with the aim of making business operations more efficient and productive, and they can be easily handled by an administrator.
The Windows Server 2008 version is developed for the web and for virtual technology as well. The security level of this server is very high, and it offers reliability and effective usage measures to clients. It makes a company’s server management process strong and smooth, is very useful for business processes and enhances the server workload it can handle. Windows server management has now become very easy, and high demand from users has driven its development. This family of Windows servers is made for the internet and contains various programs which help the client to develop safe web apps and solutions.
In comparison with previous versions, Windows server management comes with advanced security measures and high protection against software malfunction. It also prohibits unauthorized access to the server and other network data. Another internal function in Windows Server 2008 checks whether a computer complies with predefined policies. It also has a safety mechanism for database systems, with encryption and read-only restraints.
With the 2008 server version, server management becomes lighter, allowing installation of only the required tools. The user should have essential knowledge of shortcuts for reading the system, so as to minimize time and server issues. Server management is a continuous process, and it also requires IT support if a network issue happens. Servers need regular updates and maintenance, because Internet usage is high in business organizations. It is therefore very important that the monitoring support provider is quick, reliable and active all the time, in order to prevent hassle-causing downtimes.
Some aspects should be kept in mind when choosing an efficient server management services provider, such as what support features are offered. It sometimes happens that even very strong servers crash and cause heavy downtime for business processes. Hence it is good to choose a service package with quick response times for the support tickets assigned. The support engineers should be professional and able to quickly troubleshoot problems.