
CHAPTER 1
INTRODUCTION
1.1 PROBLEM STATEMENT
• NoC implementation would be much more feasible and efficient if the encoding/decoding techniques integrated in it used less area and power.
• The design of the existing technique is too complex, and it also consumes more area and power.
• The existing Walsh-code technique is replaced with the standard-basis technique.
1.2 INTRODUCTION TO NOCs:
To meet the demands of computation-intensive applications and the needs of low-power, high-performance systems, the number of computing resources on a single chip has increased enormously, because current VLSI technology can support such an extensive integration of transistors. As many computing resources such as CPUs, DSPs, and application-specific IPs are added to build a System-on-Chip, the interconnection between them becomes another challenging issue. In most System-on-Chip applications, a shared bus interconnection, which needs arbitration logic to serialize several bus access requests, is adopted for communication between the integrated processing units because of its low cost and simple control. However, such a shared bus interconnection is limited in scalability, because only one master at a time can utilize the bus, which means that all bus accesses must be serialized by the arbiter. Therefore, in an environment where the number of bus requesters is large and their required interconnection bandwidth exceeds what the bus can provide, other interconnection methods should be considered.
Such scalable bandwidth requirements can be satisfied by using an on-chip packet-switched micro-network of interconnects, generally known as a Network-on-Chip (NoC) architecture. The basic idea comes from traditional large-scale multiprocessors and distributed computing networks. The scalable and modular nature of NoCs and their support for efficient on-chip communication lead to NoC-based system implementations. Even though current network technologies are well developed and their features are excellent, their complicated configuration and implementation complexity make them hard to adopt as an on-chip interconnection methodology. To suit typical SoC or multi-core processing environments, the basic modules of the network interconnection, such as the switching logic, the routing algorithm, and the packet definition, should be lightweight enough to result in easily implementable solutions.
Background:
As semiconductor processing technology advances to ever smaller geometries, several side effects are expected. One of the critical issues is wiring delay. While the delay of basic elements such as gates becomes much smaller, the wiring delay grows sharply, as shown in Figure 1, because of the increased capacitance caused by narrower wires and increased crosstalk. Therefore, if this trend is sustained, wiring becomes one of the critical issues to be addressed.
For communication between several cores in a System-on-Chip (SoC) environment, the prevailing mechanisms are bus-based architectures and point-to-point communication methodologies. For simplicity and ease of use, bus-based architectures are the most common. However, a bus-based architecture has a fundamental bandwidth limitation: as the number of components attached to the bus increases, the physical capacitance on the bus wires grows and, as a result, the wiring delay grows even further. To overcome this fundamental limitation of scalability in bus-based architectures, advanced bus architectures such as ARM AMBA, the OpenCores WISHBONE System-on-Chip (SoC) interconnection, and IBM CoreConnect have been adopted. Figure 2 illustrates the basic structure of ARM AMBA. As shown in Figure 2, most advanced bus architectures adopt a hierarchical structure to obtain scalable communication throughput and partition the communication domains into several groups of communication layers depending on bandwidth requirements, such as high-performance and low-performance layers.
Another approach that exceeds this communication limitation and overcomes the enormous wiring delay of future technologies is to adopt network-like interconnections, known as the Network-on-Chip (NoC) architecture. The basic concept of this kind of interconnection comes from the evolution of modern computer networks, as mentioned before. By applying network-like communication, which inserts routers between the communicating objects, the required wiring can be shortened. Therefore, the switch-based interconnection mechanism provides a great deal of scalability and freedom from the limitations of complex wiring. The replacement of SoC buses by NoCs will follow the same path as data communications once the economics prove that the NoC either reduces SoC manufacturing cost, time to market, time to volume, and design risk, or increases SoC performance. According to , the NoC approach has a clear advantage over traditional buses, most notably in system throughput. Hierarchies of crossbars and multilayered buses have characteristics somewhere in between traditional buses and NoCs; however, they still fall far short of the NoC with respect to performance and complexity.
The success of the NoC design depends on research into the interfaces between the processing elements of the NoC and the interconnection fabric. The established bus interconnection of a SoC has several weak points: slow bus response time, energy limitations, scalability problems, and bandwidth limitations. A bus interconnection composed of a large number of components in a network interface can cause slow interface times because of the effect of sharing the bus. In addition, such an interconnection has the defect that power consumption is high, since all objects in the communication are connected to it. Moreover, it is impossible to increase the number of connected elements indefinitely because of the bandwidth limitation of a bus. As a consequence, the performance of the NoC design relies greatly on the interconnection paradigm.
Though network technology in computer networks is already well developed, it is almost impossible to apply it to chip-level communication without modification or reduction. To be eligible for a NoC architecture, the basic functionality should be simple and lightweight, because the implemented components of the NoC architecture should be small enough to serve as basic building blocks of a SoC. Even though the basic functionality should be simple, it must also satisfy the basic requirements of general communication. In addition, to suit the prevailing mobile environment, it should be low-power. To achieve low power, one has to consider many parameters such as the clock rate, the operating voltage, and the power management scheme.
The paper by Jian Wang and Yubai Li compares the two methods of encoding/decoding and gives a summary of both, as follows.
The first step is to encode the original bits with a spreading code (an XOR operation in the WB encoder and an AND operation in the SB encoder). Since the chip frequency is equal to the clock frequency, the number of clock cycles spent on spreading is related to the length of the spreading codes. Because the SB scheme uses the standard basis, only p clock cycles are required to finish the spreading operation for p senders. However, for the WB scheme, a q-chip Walsh code is required to spread the original data bits of p senders. As mentioned in Section I, q = 2^(⌊log2 p⌋ + 1) ≥ (p + 1).
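As a quick numeric check of these cycle counts (a small Python sketch based on the code-length relation above; it is an illustration, not code from the referenced paper):

import math

def sb_spreading_cycles(p):
    return p                                       # one chip per sender (standard basis)

def wb_spreading_cycles(p):
    return 2 ** (math.floor(math.log2(p)) + 1)     # q-chip Walsh code, q >= p + 1

for p in (4, 7, 8, 15):
    print(p, sb_spreading_cycles(p), wb_spreading_cycles(p))
# p senders -> SB cycles, WB cycles:  4 -> 4, 8;  7 -> 7, 8;  8 -> 8, 16;  15 -> 15, 16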
In the following steps, both methods need sum, extract, and accumulate operations. These operations are realized using a multibit adder (an XOR gate), a demux logic module (an AND gate), and two multibit accumulators (a single accumulator) in the WB (SB) method. Since these operations work as a pipeline, only one clock cycle is required for each operation. Moreover, the WB scheme needs an additional comparator that spends one more clock cycle (corresponding to the last line of Table II).
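To make the data path concrete, the following Python sketch is a behavioral model of both schemes (the function names and data layout are my own assumptions, not the authors' hardware): XOR spreading, per-chip multibit sums, two accumulators and a comparator for WB, versus AND spreading with one-hot codes, a demux and a single accumulator for SB.

import math

def walsh_codes(q):
    # Rows of a q x q Sylvester-Hadamard matrix in 0/1 form (q must be a power of two).
    h = [[0]]
    while len(h) < q:
        h = [row + row for row in h] + [row + [1 - c for c in row] for row in h]
    return h

def wb_encode_decode(bits):
    # Walsh-basis: XOR spreading, per-chip sums, two accumulators and a comparator.
    p = len(bits)
    q = 2 ** (math.floor(math.log2(p)) + 1)                            # q >= p + 1 chips
    codes = walsh_codes(q)[1:p + 1]                                     # skip the all-zero code
    spread = [[b ^ c for c in code] for b, code in zip(bits, codes)]    # XOR spreading
    sums = [sum(col) for col in zip(*spread)]                           # multibit adder per chip
    out = []
    for code in codes:
        acc0 = sum(s for s, c in zip(sums, code) if c == 0)             # accumulator 0
        acc1 = sum(s for s, c in zip(sums, code) if c == 1)             # accumulator 1
        out.append(1 if acc1 < acc0 else 0)                             # comparator decides the bit
    return out

def sb_encode_decode(bits):
    # Standard-basis: AND spreading with one-hot codes, demux and a single accumulator.
    p = len(bits)
    codes = [[int(i == j) for j in range(p)] for i in range(p)]         # standard-basis codes
    spread = [[b & c for c in code] for b, code in zip(bits, codes)]    # AND spreading
    sums = [sum(col) for col in zip(*spread)]                           # one sender per chip slot
    return [sum(s & c for s, c in zip(sums, code)) for code in codes]   # demux and accumulate

bits = [1, 0, 1, 1, 0]
print(wb_encode_decode(bits))   # [1, 0, 1, 1, 0], recovered from q = 8 chips
print(sb_encode_decode(bits))   # [1, 0, 1, 1, 0], recovered from only p = 5 chips

For p = 5 senders, the WB path processes q = 8 chips while the SB path needs only 5, which matches the cycle counts discussed above.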
Therefore, the total logic cost of the SB method is less than that of the WB method, since each operation needs fewer logic resources. Besides, the total latency of the SB method is (p + 3) clock cycles, which is always lower than the (q + 4) clock cycles of the WB method.
1) Scaling:
The CN (CDMA NoC) can be scaled to different network sizes using two basic methods, as shown in the figure. In the direct scaling method, the length of the orthogonal code increases with the number of PEs, so this method is more suitable for small NoCs (e.g., a CDMA NoC with several PEs). In contrast, the cluster-based scaling method, in which each cluster has several PEs and the clusters are connected to each other, can be used to scale the network hierarchically to any required size.
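As a rough comparison of the two methods (my own illustration; the cluster size of 8 PEs is an assumed value):

import math

def direct_code_length(n_pes):
    return 2 ** (math.floor(math.log2(n_pes)) + 1)        # code length grows with the PE count

def clustered(n_pes, cluster_size=8):
    n_clusters = math.ceil(n_pes / cluster_size)
    return n_clusters, direct_code_length(cluster_size)   # per-cluster code length stays fixed

for n in (8, 32, 128):
    print(n, direct_code_length(n), clustered(n))
# 8 PEs   -> direct: 16-chip codes;  clustered: 1 cluster,   16-chip codes
# 32 PEs  -> direct: 64-chip codes;  clustered: 4 clusters,  16-chip codes
# 128 PEs -> direct: 256-chip codes; clustered: 16 clusters, 16-chip codes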
2) Topology:
Although a CDMA node cluster may be limited to the star topology, other topologies can also be obtained by using the cluster-based scaling method. For example, Kim et al. developed a hierarchical star topology, Lee and Sobelman developed a mesh topology, and so on.
3) Routing:
There exist various incremental and global routing schemes for CDMA NoCs. Consider incremental routing, where the routing scheme is related to the packet format. In general, the packet header contains the destination PE address. The source CN checks the destination address, determines the next-hop CN or PE, and allocates the corresponding spreading code for packet encoding and decoding so that the packet reaches the right output port. The next-hop CN continues the process until the destination PE is reached. More details and other routing schemes can be found in the literature.
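A minimal sketch of this hop-by-hop process is given below (the table layout, node names, and code indices are hypothetical; no published packet format is implied): each CN looks only at the destination PE address, picks the next hop, and assigns the spreading code tied to that output port.

# Per-CN routing table: destination PE -> (next hop, spreading-code index). Hypothetical values.
ROUTING_TABLE = {
    "CN0": {"PE1": ("PE1", 1), "PE3": ("CN1", 2)},
    "CN1": {"PE3": ("PE3", 3)},
}

def route(packet, cn="CN0"):
    # Forward the packet hop by hop until the destination PE is reached.
    hops = []
    while True:
        next_hop, code = ROUTING_TABLE[cn][packet["dst"]]
        hops.append((cn, next_hop, code))
        if next_hop == packet["dst"]:      # destination PE reached
            return hops
        cn = next_hop                      # the next-hop CN repeats the process

print(route({"dst": "PE3", "payload": 0xAB}))
# [('CN0', 'CN1', 2), ('CN1', 'PE3', 3)]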
4) Traffic Patterns:
For the CDMA NoC, the influence of traffic patterns has been discussed before and some
real applications have also been mapped onto the CDMA NoCs to show their advantages.
CHAPTER 3
NETWORK on CHIP vs SYSTEM on CHIP:
Every day, cyber security professionals go to work without any idea about the identity and
probable actions of their adversaries. In information security, just as on the military
battlefield, if you do not understand the motivations, intentions and competencies of your
opponents, then you cannot understand the risks to your enterprise or focus on your
defenses.
Even after all the recent data breaches and hacking incidents, many people, companies and
organizations still disregard major security protocols and fail to understand that cyber
security is a discipline where cyber criminals and hacktivists are always a step ahead.
There are several ways by which a company or an organization defends against a cyber attack. Many companies have adopted the “monitor and response” strategy. This strategy recognizes that a signature-based defense alone won't be effective against sophisticated targeted attacks. This work generally takes place in a Security Operations Center (SOC) or a Network Operations Center (NOC). In most organizations, the SOC and NOC run alongside each other, but separately.
There are some similarities between the roles of the Network Operations Center (NOC) and the Security Operations Center (SOC); however, this often leads to the mistaken idea that one can easily handle the other's duties. This couldn't be further from the truth.
So why can't the NOC just handle both functions? Why should the SOC and NOC work separately, yet operate in conjunction with one another?
First, the roles of SOC and NOC are subtly but fundamentally different. While it is true
that both SOC and NOC are responsible for identifying, investigating, prioritizing,
escalating and resolving issues, the types of issues and the impact they have are
considerably different.
The NOC is responsible for handling incidents that affect performance or availability
while the SOC handles those incidents that affect the security of information assets.

Both SOC and NOC are involved in risk management and risk mitigation; however, the
way they accomplish this goal is different.
The NOC’s job is to meet service level agreements (SLAs) and manage incidents in a way
that reduces downtime. It focuses on availability and performance.
The SOC, however, is in charge of protecting intellectual property and sensitive customer
data – a focus on security.
While both of these things are critically important to any organization, combining the SOC
and NOC into one entity and having them each handle the other’s duties can spell disaster
– because their approaches are so different.
Another reason the NOC and SOC should not be combined is because their skill sets are
different.
A NOC analyst must be proficient in network, application and systems engineering, while
SOC analysts require security-engineering skills.
Last but not least, the very nature of the adversaries that each group tackles is different.
The SOC focuses on “intelligent adversaries” while the NOC deals with naturally
occurring system events.
These are completely different directions, which result in contrasting solutions.
Consequently, both SOC and NOC are needed to work side-by-side but in conjunction
with one another.
Network Operations Centers (NOCs), fairly well known because they are a common service today, are usually based in facilities with large screens or video walls, workstations for operators and analysts, meeting rooms, coffee rooms, and break rooms. In short, an area suitable for continuous monitoring of activity in telecommunication networks, service systems, TV broadcasts, and so on. Their main goal is to monitor the “availability” of networks and services. SOCs (Security Operations Centers) are perhaps not so well known within the operations area, and while their physical aspect may be very similar to NOCs, their goals are quite different, mainly because they are oriented toward protecting the security (confidentiality, integrity, and availability) of networks and services. They must be able to detect any malicious activity present in the network through sensors installed on different platforms, and they must report, manage, and respond to different alarms. In the family of operations centers there is one more brother, known as the CyberSOC or Advanced SOC. The general concepts and main differences between these “three brothers” can be analyzed not only in terms of their objectives and functions, but also in terms of their organization, the tools they need to function and deliver value-added services, what a client should expect of each of them, and how a company can improve.
CHAPTER 4
WHY CDMA?
4.1 INTRODUCTION TO CDMA:
Code Division Multiple Access (CDMA) is a digital cellular technology used for mobile
communication. CDMA is the base on which access methods such as cdmaOne,
CDMA2000, and WCDMA are built. CDMA cellular systems are deemed superior to
FDMA and TDMA, which is why CDMA plays a critical role in building efficient, robust,
and secure radio communication systems.
A Simple Analogy
Let’s take a simple analogy to understand the concept of CDMA. Assume we have a few
students gathered in a classroom who would like to talk to each other simultaneously.
Nothing would be audible if everyone starts speaking at the same time. Either they must
take turns to speak or use different languages to communicate.
The second option is quite similar to CDMA — students speaking the same language can
understand each other, while other languages are perceived as noise and rejected.
Similarly, in radio CDMA, each group of users is given a shared code. Many codes occupy
the same channel, but only those users associated with a particular code can communicate.
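The “different languages” analogy can be made concrete with a tiny baseband model (an illustrative Python sketch, not the behavior of any particular standard such as cdmaOne or WCDMA): each user spreads its bit with an orthogonal ±1 code, the spread signals add on the shared channel, and a receiver correlating with its own code recovers its bit while the other users contribute nothing, i.e., they are perceived as noise and rejected.

CODES = {                        # 4-chip Walsh codes in +/-1 form
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
}

def transmit(bits):
    # Each user maps bit {0,1} to {-1,+1}, multiplies by its code, and the chips add on the channel.
    signals = [[(1 if b else -1) * c for c in CODES[u]] for u, b in bits.items()]
    return [sum(chips) for chips in zip(*signals)]

def receive(channel, user):
    # Correlate the received chips with the user's own code and threshold the result.
    corr = sum(x * c for x, c in zip(channel, CODES[user]))
    return 1 if corr > 0 else 0

channel = transmit({"A": 1, "B": 0, "C": 1})
print([receive(channel, u) for u in "ABC"])   # [1, 0, 1]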
Salient Features of CDMA
CDMA, which is based on the spread spectrum technique, has the following salient features:
• In CDMA, every channel uses the full available spectrum.
• Individual conversations are encoded with a pseudo-random digital sequence and then