Since I am starting my SP studies again, I figured I would do a post on basic MPLS VPNs with VRFs, using just static routes. These are some of the fundamentals of an MPLS VPN deployment, and from here you can quickly build out and add complexity if you wish. What I will cover here are the VRF, RD, LDP, and MP-BGP configurations, and I will start by defining and explaining the technologies.
Virtual Routing and Forwarding, or VRF for short, is a technology that allows a router to run multiple instances of a routing table at the same time. These routing tables co-exist on the router, yet are segmented and independent of each other. An analogy would be server virtualization: you can have a single server running VMware, and under that have multiple instances of Windows running independently of each other. You can configure those virtual servers to talk to each other, or completely isolate them so that they have no knowledge of each other.
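In IOS terms, each VRF gets its own routing table that you view separately from the global one. For example, once the Green VRF from this lab exists, these two commands look at two completely different tables (commands only here; real output appears later in the post):
show ip route
show ip route vrf Green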
A Route Distinguisher, or RD, is a way to identify a VPN route in an MPLS network. It is an 8-byte value prepended to an IPv4 prefix to create a VPNv4 prefix, typically referred to as a VPN-IPv4 address. Since each customer is assigned a unique RD, their addresses are guaranteed to be unique. So if Customer A has a 10/8 address space and Customer B also has a 10/8 address space, the addition of the RD brings uniqueness to each of them. An RD normally looks something like 1:100.
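To make that concrete with the RDs used later in this post (1:14 for Green and 2:36 for Blue): if both customers advertised the same prefix, say an illustrative 10.1.1.0/24 (not actually used in this lab), the PE would turn it into two distinct VPNv4 prefixes:
1:14:10.1.1.0/24 (Customer A, VRF Green)
2:36:10.1.1.0/24 (Customer B, VRF Blue)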

Now onto the fun stuff: Multiprotocol BGP with regard to VPNv4 and VRFs. As you are probably already aware, BGP can be a very powerful protocol, and MPLS VPNs take advantage of this. Right from RFC 4577: "Many Service Providers offer Virtual Private Network (VPN) services to their customers, using a technique in which customer edge routers (CE routers) are routing peers of provider edge routers (PE routers). The Border Gateway Protocol (BGP) is used to distribute the customer's routes across the provider's IP backbone network, and Multiprotocol Label Switching (MPLS) is used to tunnel customer packets across the provider's backbone. This is known as a 'BGP/MPLS IP VPN'." The VRF part of BGP comes into play because it provides the ability to redistribute between BGP and each defined customer VRF. The PE-to-CE relationship can use any routing protocol you choose (RIP, OSPF, EIGRP, static, BGP, etc.), and within each VRF you can redistribute routes between your protocol of choice and BGP.
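In configuration terms, that boils down to one VPNv4 session between the PE loopbacks plus one ipv4 address family per customer VRF on each PE. Roughly this shape (just a sketch of what we will build step by step below; the placeholder values get filled in later):
router bgp 1
 neighbor <PE-loopback> remote-as 1
 neighbor <PE-loopback> update-source Loopback0
 address-family vpnv4
  neighbor <PE-loopback> activate
  neighbor <PE-loopback> send-community both
 address-family ipv4 vrf Green
  redistribute connected
  redistribute static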
So, enough of the typing and reading; let's get on to the configuration example. Below is a diagram of the network that I will build out and test. There are basically three networks here: VRF Green, VRF Blue, and the yellow Service Provider backbone network. For the PE-to-CE protocol we will configure static routes, keeping things simple. For the Service Provider network we will use OSPF Area 0, again keeping things simple. For VRF Green we will use RD 1:14 and for VRF Blue we will use RD 2:36. The Service Provider network will also run MPLS LDP, and we will have an MP-BGP peering between the loopbacks of R2 and R5.
 

[Diagram: MPLS VPN lab topology showing VRF Green, VRF Blue, and the Service Provider core]

First, let's build the SP core network (R2, R7, R8, and R5) and configure OSPF. We will hold off on the MPLS portion for now; just the basic OSPF network to make sure that we have full connectivity.
R2:
Let's get the interface configured
Rack1R2(config)# interface GigabitEthernet0/0
Rack1R2(config-if)# ip address 220.61.27.2 255.255.255.0
Rack1R2(config-if)# no shut
Create our loopback address
Rack1R2(config)# interface Loopback0
Rack1R2(config-if)# ip address 220.61.253.2 255.255.255.255
…and our OSPF processes and area
Rack1R2(config)# router ospf 1
Rack1R2(config-router)# network 220.61.253.2 0.0.0.0 area 0
Rack1R2(config-router)# network 220.61.27.0 0.0.0.255 area 0
R7:
Rack1R7(config)# interface GigabitEthernet0/0
Rack1R7(config-if)# ip address 220.61.27.7 255.255.255.0
Rack1R7(config-if)# no shut
Rack1R7(config)# interface GigabitEthernet0/1
Rack1R7(config-if)# ip address 220.61.78.7 255.255.255.0
Rack1R7(config-if)# no shut
Rack1R7(config)# router ospf 1
Rack1R7(config-router)# network 220.61.27.0 0.0.0.255 area 0
Rack1R7(config-router)# network 220.61.78.0 0.0.0.255 area 0

R8:
Rack1R8(config)# interface FastEthernet0/0
Rack1R8(config-if)# ip address 220.61.85.8 255.255.255.0
Rack1R8(config-if)# no shut
Rack1R8(config)# interface FastEthernet0/1
Rack1R8(config-if)# ip address 220.61.78.8 255.255.255.0
Rack1R8(config-if)# no shut
Rack1R8(config)# router ospf 1
Rack1R8(config-router)# network 220.61.78.0 0.0.0.255 area 0
Rack1R8(config-router)# network 220.61.85.0 0.0.0.255 area 0

R5:
Rack1R5(config)# interface GigabitEthernet0/0
Rack1R5(config-if)# ip address 220.61.85.5 255.255.255.0
Rack1R5(config-if)# no shut
Rack1R5(config)# interface Loopback0
Rack1R5(config-if)# ip address 220.61.253.5 255.255.255.255
Rack1R5(config)# router ospf 1
Rack1R5(config-router)# network 220.61.85.0 0.0.0.255 area 0
Rack1R5(config-router)# network 220.61.253.5 0.0.0.0 area 0

Ok, that should give us basic connectivity. Let's test with a PING from R5's loopback to R2's loopback address:
Rack1R5# ping 220.61.253.2 so l0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 220.61.253.2, timeout is 2 seconds:
Packet sent with a source address of 220.61.253.5
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/4 ms
Rack1R5#

Cool, now that we have basic SP backbone connectivity we can add MPLS to the picture. All we have to do is enable MPLS IP on the interfaces, but it is also smart to explicitly set the label protocol to LDP (Label Distribution Protocol), since older IOS versions default to TDP.
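For reference, the per-router pattern is just these two commands, plus an optional third command (not used in this lab, mentioned only as a common practice) that pins the LDP router ID to the loopback so the LDP identity stays stable:
mpls label protocol ldp
interface GigabitEthernet0/0
 mpls ip
!
! optional, assuming Loopback0 carries your stable /32:
mpls ldp router-id Loopback0 force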
R2:
We need to tell the router we want to use LDP
Rack1R2(config)# mpls lab pro ldp
And then enable MPLS IP on the associated interfaces
Rack1R2(config)# int g0/0
Rack1R2(config-if)# mpls ip

R7:
Rack1R7(config)# mpls lab pro ldp
Rack1R7(config)# int g0/0
Rack1R7(config-if)# mpls ip
Rack1R7(config-if)# int g0/1
Rack1R7(config-if)# mpls ip
See that, we have LDP neighbored up with R2!
*Sep 9 01:45:08.042: %LDP-5-NBRCHG: LDP Neighbor 220.61.253.2:0 (1) is UP

R8:
Rack1R8(config)# mpls lab pro ldp
Rack1R8(config)# int f0/0
Rack1R8(config-if)# mpls ip
Rack1R8(config-if)# int f0/1
Rack1R8(config-if)# mpls ip
…and R8 neighbored up with R7.
*Mar 1 05:56:53.982: %LDP-5-NBRCHG: LDP Neighbor 220.61.78.7:0 is UP
Rack1R8(config-if)#

R5:
Rack1R5(config)# mpls lab pro ldp
Rack1R5(config)# int g0/0
Rack1R5(config-if)# mpls ip
And finally R5 neighbored up with R8
*Sep 9 03:34:52.726: %LDP-5-NBRCHG: LDP Neighbor 220.61.85.8:0 (1) is UP

Let's take a quick look at R8 and its LDP neighbors:
Rack1R8#sh mpls ldp neighbor
    Peer LDP Ident: 220.61.78.7:0; Local LDP Ident 220.61.85.8:0
        TCP connection: 220.61.78.7.646 - 220.61.85.8.31158
        State: Oper; Msgs sent/rcvd: 1039/1039; Downstream
        Up time: 15:03:29
        LDP discovery sources:
          FastEthernet0/1, Src IP addr: 220.61.78.7
        Addresses bound to peer LDP Ident:
          220.61.27.7     220.61.78.7
    Peer LDP Ident: 220.61.253.5:0; Local LDP Ident 220.61.85.8:0
        TCP connection: 220.61.253.5.33668 - 220.61.85.8.646
        State: Oper; Msgs sent/rcvd: 1041/1037; Downstream
        Up time: 15:03:04
        LDP discovery sources:
          FastEthernet0/0, Src IP addr: 220.61.85.5
        Addresses bound to peer LDP Ident:
          220.61.85.5     220.61.253.5
Rack1R8#

We can see that R8 is neighbored up and has all of that neighbor's addresses bound to it. We are good to go! You might also notice that the only addresses bound to the peer are the ones that are not in a VRF.
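A couple of other handy checks at this stage (commands only, output omitted here):
Rack1R8# show mpls interfaces
Rack1R8# show mpls ldp bindings
The first confirms which interfaces are actually running MPLS and LDP; the second lists the label bindings learned from each peer.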

Now that we have LDP configured and enabled, let's look at the MPLS forwarding table to see if we have labels.
Rack1R5#sh mpls forwarding-table
Local      Outgoing   Prefix              Bytes Label   Outgoing   Next Hop
Label      Label      or VC or Tunnel Id  Switched      interface
16         18         220.61.27.0/24      0             Gi0/0      220.61.85.8
17         Pop Label  220.61.78.0/24      0             Gi0/0      220.61.85.8
18         17         220.61.253.2/32     0             Gi0/0      220.61.85.8
Rack1R5#

Looking good! Now let's get the PE-to-CE configuration done for the Green VRF.
R2:
Rack1R2#conf t
Enter configuration commands, one per line. End with CNTL/Z.
First we need to define the VRF, in this case Green.
Rack1R2(config)#ip vrf Green
Now we can assign our Route Distinguisher. This is usually the same for all instances of the VRF across the Provider network.
Rack1R2(config-vrf)#rd 1:14
We will now tell the VRF to export and import anything carrying a route target of 1:14. The both keyword configures both import and export.
Rack1R2(config-vrf)#route-target both 1:14
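For reference, route-target both 1:14 is simply shorthand for configuring the two directions separately:
route-target export 1:14
route-target import 1:14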
Now onto the Serial interface. First we configure the interface for Frame Relay.
Rack1R2(config-vrf)#int ser 4/0
Rack1R2(config-if)#enc frame
Rack1R2(config-if)#no frame inv
Rack1R2(config-if)#no arp frame
Now we can configure the sub-interface for point-to-point connections.
Rack1R2(config)#int ser 4/0.21 p
Let's place the interface into the appropriate VRF
Rack1R2(config-subif)#ip vrf forwarding Green
…and configure the IP and associated DLCI
Rack1R2(config-subif)#ip add 192.168.21.2 255.255.255.0
Rack1R2(config-subif)#frame-relay interface-dlci 201
Rack1R2(config-fr-dlci)#int ser 4/0
Rack1R2(config-if)#no shut

Since R1 is a CE router, not a PE, no VRF configuration is necessary there. All we need to do is configure it like we normally would.
R1:
Rack1R1(config)#int ser 4/0
Rack1R1(config-if)#encapsulation frame-relay
Rack1R1(config-if)#no frame inv
Rack1R1(config-if)#no arp frame
Rack1R1(config-if)#int ser 4/0.1 p
Rack1R1(config-subif)#ip add 192.168.21.1 255.255.255.0
Rack1R1(config-subif)#frame-relay interface-dlci 102
Rack1R1(config-fr-dlci)#int ser 4/0
Rack1R1(config-if)#no shut
Rack1R1(config-if)#int l0
Rack1R1(config-if)#ip add 192.168.253.1 255.255.255.255
Since we are not running a routing protocol here, we will need to configure a static route pointing towards R2.
Rack1R1(config)#ip route 0.0.0.0 0.0.0.0 192.168.21.2

Let's test a PING from R2 to R1's serial address:
Rack1R2#p 192.168.21.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.21.1, timeout is 2 seconds:
…..
Success rate is 0 percent (0/5)
Rack1R2#

Nope, it failed. Why? Because the interface facing R1 now lives in a VRF, so we need to tell the router to source the ping from within that VRF. Let's try that again and tell it to use the Green VRF:
Rack1R2#ping vrf Green 192.168.21.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.21.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/67/104 ms
Rack1R2#

That worked! Now let's add a static route on R2 for R1's loopback interface.
Remember we are dealing with VRFs here, so we need to place that route in the VRF.
Rack1R2(config)#ip route vrf Green 192.168.253.1 255.255.255.255 192.168.21.1
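Side note: because this route only exists in the Green table, you would verify it with the VRF-aware versions of the usual commands (output omitted here):
Rack1R2# show ip route vrf Green
Rack1R2# show ip vrf interfaces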
Now let's ping R1's loopback from the Green VRF on R2:
Rack1R2#ping vrf Green 192.168.253.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.253.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/56/60 ms
Rack1R2#

There we go! Now onto the rest of the routers for the Green VRF.
R5:
Rack1R5(config)#ip vrf Green
Rack1R5(config-vrf)#rd 1:14
Rack1R5(config-vrf)#route-target both 1:14
Rack1R5(config-vrf)#int ser 0/0/0
Rack1R5(config-if)#enc fra
Rack1R5(config-if)#no frame inv
Rack1R5(config-if)#no arp fram
Rack1R5(config-if)#int ser 0/0/0.1 p
Rack1R5(config-subif)#ip add 192.168.54.5 255.255.255.0
Rack1R5(config-subif)#ip vrf forwarding Green
When you place an interface into a VRF, any existing IP address is removed, so you will need to re-enter it. (Applying ip vrf forwarding before the IP address, as we did on R2, avoids this.)
% Interface Serial0/0/0.1 IP address 192.168.54.5 removed due to enabling VRF Green
Rack1R5(config-subif)#ip add 192.168.54.5 255.255.255.0
Rack1R5(config-subif)#frame-relay interface-dlci 504
Rack1R5(config-fr-dlci)#int ser 0/0/0
Rack1R5(config-if)#no shut

Just like R1, there is no VRF configuration on R4. Just a normal router.
R4:
Rack1R4(config)#int ser 0/0/0
Rack1R4(config-if)#encapsulation frame-relay
Rack1R4(config-if)#no frame inv
Rack1R4(config-if)#no arp frame
Rack1R4(config-if)#int ser 0/0/0.1 p
Rack1R4(config-subif)#ip add 192.168.54.4 255.255.255.0
Rack1R4(config-subif)# frame-relay interface-dlci 405
Rack1R4(config-if)#int ser 0/0/0
Rack1R4(config-if)#no shut
Rack1R4(config-if)#int l0
Rack1R4(config-if)#ip add 192.168.253.4 255.255.255.255
Rack1R4(config)#ip route 0.0.0.0 0.0.0.0 192.168.54.5

Let's test a PING from R5 to R4's loopback address, but first we will have to add the static route in the VRF for R4's loopback.
Rack1R5(config)#ip route vrf Green 192.168.253.4 255.255.255.255 192.168.54.4
Rack1R5#ping vrf Green 192.168.253.4
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.253.4, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/56/60 ms
Rack1R5#

Now onto the Blue VRF. The commands are the same, just a different RD and VRF.
First R2 and R3
R2:
Rack1R2(config)#ip vrf Blue
Rack1R2(config-vrf)#rd 2:36
Rack1R2(config-vrf)#route-target both 2:36
Rack1R2(config)#int ser 4/0.2 p
Rack1R2(config-subif)#ip vrf forwarding Blue
Rack1R2(config-subif)#ip add 192.168.23.2 255.255.255.0
Rack1R2(config-subif)#frame-relay interface-dlci 203
Rack1R2(config-fr-dlci)#exit
Rack1R2(config-subif)#exit
We will configure the necessary static route for the Blue VRF as well.
Rack1R2(config)#ip route vrf Blue 192.168.253.3 255.255.255.255 192.168.23.3

R3:
Rack1R3(config)#int ser 0/0/0
Rack1R3(config-if)#encapsulation frame-relay
Rack1R3(config-if)#no frame inv
Rack1R3(config-if)#no arp frame
Rack1R3(config-if)#int ser 0/0/0.1 p
Rack1R3(config-subif)#ip add 192.168.23.3 255.255.255.0
Rack1R3(config-subif)#frame-relay interface-dlci 302
Rack1R3(config)#int ser 0/0/0
Rack1R3(config-if)#no shut
Rack1R3(config)#int l0
Rack1R3(config-if)#ip add 192.168.253.3 255.255.255.255
Rack1R3(config)#ip route 0.0.0.0 0.0.0.0 192.168.23.2

Now to test PING from R2 to R3 loopback:
Rack1R2#ping vrf Blue 192.168.253.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.253.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/56/60 ms
Rack1R2#

Success!!!
Onto R5 and R6!
R5:
Rack1R5(config)#ip vrf Blue
Rack1R5(config-vrf)#rd 2:36
Rack1R5(config-vrf)#route-target both 2:36
Rack1R5(config-vrf)#int ser 0/0/0.2 p
Rack1R5(config-subif)#ip vrf forwarding Blue
Rack1R5(config-subif)#ip add 192.168.56.5 255.255.255.0
Rack1R5(config-subif)#frame-relay interface-dlci 506
Rack1R5(config-fr-dlci)#exit
Rack1R5(config)#ip route vrf Blue 192.168.253.6 255.255.255.255 192.168.56.6

R6:
Rack1R6(config)#int ser 0/0/0
Rack1R6(config-if)#encapsulation frame-relay
Rack1R6(config-if)#no frame inv
Rack1R6(config-if)#no arp frame
Rack1R6(config-if)#int ser 0/0/0.1 p
Rack1R6(config-subif)#ip add 192.168.56.6 255.255.255.0
Rack1R6(config-subif)#frame-relay interface-dlci 605
Rack1R6(config)#int ser 0/0/0
Rack1R6(config-if)#no shut
Rack1R6(config-if)#int l0
Rack1R6(config-if)#ip add 192.168.253.6 255.255.255.255
Rack1R6(config-if)#exit
Rack1R6(config)#ip route 0.0.0.0 0.0.0.0 192.168.56.5

Let's test a ping from R6 to R5:
Rack1R6#p 192.168.56.5
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.56.5, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/56/60 ms
Rack1R6#

Now let's test a ping from R5 to R6's loopback:
Rack1R5#ping vrf Blue 192.168.253.6
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.253.6, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 56/56/60 ms
Rack1R5#

Good to go!
Now before I go on, let's test PING from R1 to R3, R4, and R6. We have configured a default route on each of the CE devices, so this should work, right?
Rack1R1#ping 192.168.253.4 t 1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.253.4, timeout is 1 seconds:
U.U.U
Success rate is 0 percent (0/5)
Rack1R1#ping 192.168.253.3 t 1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.253.3, timeout is 1 seconds:
U.U.U
Success rate is 0 percent (0/5)
Rack1R1#ping 192.168.253.6 t 1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.253.6, timeout is 1 seconds:
U.U.U
Success rate is 0 percent (0/5)
Rack1R1#

Nope, nobody can ping anyone. This is because, even though we have created VRFs with the same RDs on both PE routers, we have not configured any routing between the two PEs, R2 and R5, to carry the customer routes across the backbone. This is where MP-BGP comes into play. Let's configure the basic BGP peering via loopbacks between R2 and R5.
R2:
Rack1R2(config)#router bgp 1
Rack1R2(config-router)#nei 220.61.253.5 remote-as 1
Rack1R2(config-router)#nei 220.61.253.5 up l0

R5:
Rack1R5(config)#router bgp 1
Rack1R5(config-router)#nei 220.61.253.2 remote-as 1
Rack1R5(config-router)#nei 220.61.253.2 up l0
There, we have BGP neighbors up.
*Sep 9 14:02:26.955: %BGP-5-ADJCHANGE: neighbor 220.61.253.2 Up
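To confirm the session at this point you can run the following on either PE (output omitted; no VPN routes are exchanged yet, since we have not activated the VPNv4 address family):
Rack1R2# show ip bgp summary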

Ok, basic BGP peering is up; now we can create the VPNv4 session between R2 and R5 so that we can pass the VPN routes, along with their Route Targets (RTs), between the peers. We will also create the associated ipv4 VRF address families and redistribute the static and connected routes into BGP.
R2:
Rack1R2(config)#router bgp 1
Now we can create the VPNv4 connection between R2 and R5 via the address-family commands.
Rack1R2(config-router)#address-family vpnv4
We need to activate the neighbor.
Rack1R2(config-router-af)#neighbor 220.61.253.5 activate
Rack1R2(config-router-af)#nei 220.61.253.5 next-hop-self
And tell the router to send the RT extended communities along, as well as any standard communities. By default the router will add send-community extended when the neighbor is activated, but if you need to also send standard communities later on you will have to change it; it is just easier to enable both standard and extended now.
Rack1R2(config-router-af)#neighbor 220.61.253.5 send-community both
Rack1R2(config-router-af)#exit
Now we can work on the VRF address families, Green first.
Rack1R2(config-router)#address-family ipv4 vrf Green
We will tell BGP to redistribute connected routes in the Green VRF
Rack1R2(config-router-af)#redistribute connected
As well as any static routes
Rack1R2(config-router-af)#redistribute static
Rack1R2(config-router-af)#exit
And now the same for the Blue VRF
Rack1R2(config-router)#address-family ipv4 vrf Blue
Rack1R2(config-router-af)#red con
Rack1R2(config-router-af)#red st

R5:
Same config here basically.
Rack1R5(config)#router bgp 1
Rack1R5(config-router)#address-family vpnv4
Rack1R5(config-router-af)#neighbor 220.61.253.2 activate
Rack1R5(config-router-af)#nei 220.61.253.2 next-hop-self
Rack1R5(config-router-af)#nei 220.61.253.2 send-community both
Rack1R5(config-router-af)#exit
Rack1R5(config-router)#address-family ipv4 vrf Green
Rack1R5(config-router-af)#red connected
Rack1R5(config-router-af)#red static
Rack1R5(config-router-af)#exit
Rack1R5(config-router)#address-family ipv4 vrf Blue
Rack1R5(config-router-af)#red connected
Rack1R5(config-router-af)#red static

Once that is configured you can look at the VPNv4 topology, and you should see all the associated routes with their Route Distinguishers:
Rack1R5#sh ip bgp vpnv4 all
BGP table version is 17, local router ID is 220.61.253.5
Status codes: s suppressed, d damped, h history, * valid, > best, i - internal,
              r RIB-failure, S Stale
Origin codes: i - IGP, e - EGP, ? - incomplete

   Network          Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 1:14 (default for vrf Green)
*>i192.168.21.0     220.61.253.2             0    100      0 ?
*> 192.168.54.0     0.0.0.0                  0         32768 ?
*>i192.168.253.1/32 220.61.253.2             0    100      0 ?
*> 192.168.253.4/32 192.168.54.4             0         32768 ?
Route Distinguisher: 2:36 (default for vrf Blue)
*>i192.168.23.0     220.61.253.2             0    100      0 ?
*> 192.168.56.0     0.0.0.0                  0         32768 ?
*>i192.168.253.3/32 220.61.253.2             0    100      0 ?
*> 192.168.253.6/32 192.168.56.6             0         32768 ?
Rack1R5#
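You can also look at things from a single customer's perspective with the per-VRF view, and the same table on R2 should be the mirror image (commands only, output omitted):
Rack1R5# show ip bgp vpnv4 vrf Green
Rack1R2# show ip bgp vpnv4 all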

Let's test a PING from R1 to R4's loopback:
Rack1R1#p 192.168.253.4 so l0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.253.4, timeout is 2 seconds:
Packet sent with a source address of 192.168.253.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 112/114/116 ms
Rack1R1#

Let's do a quick TRACEROUTE:
Rack1R1#traceroute 192.168.253.4
Type escape sequence to abort.
Tracing the route to 192.168.253.4

1 192.168.21.2 28 msec 28 msec 28 msec
2 220.61.27.7 [MPLS: Labels 18/20 Exp 0] 152 msec 148 msec 152 msec
3 220.61.78.8 [MPLS: Labels 16/20 Exp 0] 152 msec 148 msec 152 msec
4 192.168.54.5 [MPLS: Label 20 Exp 0] 56 msec 56 msec 56 msec
5 192.168.54.4 56 msec * 56 msec
Rack1R1#

There we can see the traffic traversing the MPLS network as expected. Note the two labels in the trace: the outer label is the LDP transport label that carries the packet to the egress PE (R5), and the inner label is the VPN label that R5 uses to place the packet into the correct VRF.
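If you want to see the label stack that R2 imposes for the Green prefixes, you can check CEF within the VRF (output omitted here; the exact format varies by IOS version):
Rack1R2# show ip cef vrf Green 192.168.253.4 detail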
Let's test a PING from R3 to R6:
Rack1R3#ping 192.168.253.6 so l0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.253.6, timeout is 2 seconds:
Packet sent with a source address of 192.168.253.3
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 112/114/116 ms

Rack1R3#traceroute 192.168.253.6
Type escape sequence to abort.
Tracing the route to 192.168.253.6

1 192.168.23.2 28 msec 28 msec 28 msec
2 220.61.27.7 [MPLS: Labels 18/22 Exp 0] 152 msec 148 msec 152 msec
3 220.61.78.8 [MPLS: Labels 16/22 Exp 0] 148 msec 152 msec 148 msec
4 192.168.56.5 [MPLS: Label 22 Exp 0] 60 msec 56 msec 56 msec
5 192.168.56.6 56 msec * 52 msec
Rack1R3#

Now what about R1 to R3:
Rack1R1#p 192.168.253.3 so l0 t 1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.253.3, timeout is 1 seconds:
Packet sent with a source address of 192.168.253.1
U.U.U
Success rate is 0 percent (0/5)
Rack1R1#

Nope, and that is exactly what we expect to see. R1 and R4 are in the Green VRF, while R3 and R6 are in the Blue VRF. They are kept separate, so we have full connectivity within each VRF while the other VRF is not reachable. Below is a diagram of what is happening: the Green VRF is kept separate from the Blue VRF the entire time. Even though both transit the same PE routers (R2 and R5), the VRFs and RTs are what keep the routes separate and unique.

[Diagram: Green and Blue VRFs carried separately across the shared PE routers via their RDs and RTs]
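Each PE keeps this separation in its VRF definitions; you can verify the RD and the import/export route targets configured per VRF with (output omitted here):
Rack1R2# show ip vrf detail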

One last thing to look at: the routing table on a router in the SP backbone, here R8.
Rack1R8#sh ip route
Codes: C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route

Gateway of last resort is not set

C    220.61.85.0/24 is directly connected, FastEthernet0/0
     220.61.253.0/32 is subnetted, 2 subnets
O       220.61.253.5 [110/11] via 220.61.85.5, 22:06:40, FastEthernet0/0
O       220.61.253.2 [110/21] via 220.61.78.7, 22:06:40, FastEthernet0/1
C    220.61.78.0/24 is directly connected, FastEthernet0/1
O    220.61.27.0/24 [110/20] via 220.61.78.7, 22:06:40, FastEthernet0/1
Rack1R8#

As you can see, R8 has no routes to the customer networks at all. The customer networks are completely separated from the provider's global routing table; the core routers simply label-switch the customer traffic across the backbone.