ATI Framebuffer development



I am interested in this project too; as you know, I have a newer HD Radeon, but I am willing to help out in any way I can. With Callisto and AGP in Tiger I get good speed and benchmarks, and I hope this is the missing key to finish support for ATI Radeons. I already have a working framebuffer, but I believe a homebrew one would suit my needs better, since it would be designed to work with all our cards.


Wandering the web, I found the latest "ATI Displays" tools (4.5.9, last updated for the G5 and X1900, IIRC).

It adds a special panel in System Preferences and, as usual, it comes with a TV-out kext; more interestingly, this time it seems to be universal.

I tried 4.5.7 not long ago and managed to get the tools to "see" my Radeon 9250; it needs the correct values injected (especially the board codename). However, most features were greyed out because of missing Intel kexts.

Maybe this time TV out is possible.

I quickly tried again with this new version and my X1600 but didn't manage it; I haven't bothered much about it, though. I'm uploading it here; maybe you'll get something out of it.

I'll try again with my 9250 when I have some time to waste.

 

You can get it here


A little progress for my X1400 Mobility!

I subclassed IONDRVFramebuffer and added some code to cscGetNextResolution; now more resolutions appear in Display preferences.


I'm using a manually injected EDID to get these resolutions. The problem is that the native resolution, 1400x1050, is not shown; I haven't figured out why yet. I also need to add code to cscSwitchResolution to actually do the job!

 

Edit:

I made a mistake when assigning the displayMode ID; after fixing it, all the resolutions my internal LCD supports show up.



I don't know that much about this, but in Post 50 (by Slice) I see a for i=4 loop, which gets the resolutions from the EDID, and you happen to get 4 resolutions. So my guess is that it passes the 4 lowest resolutions first, and the loop never gets to 1400, since you do have the common 640, 800, 1024 and 1280 ones.

 

So maybe changing it to i=5 or 6 might do the job?

 

(feel free to shoot me if this is a stupid answer)


I'm not using Slice's source code; mine is based directly on Apple's source. Actually, Slice's kext is not for Radeon cards beyond the X800.

Also, the i=4 loop is just for the detailed timing modes: four is the maximum number of that kind of mode in an EDID. The 4 modes I observed come from the established timing modes and standard timing modes.

 

After fixing the code, the two detailed timing modes, 1400x1050@60Hz and @50Hz, are now listed in Display preferences. But one thing still confuses me: the rowBytes given by the boot mode "1400x1050x32" is 5632, instead of my calculated value of 5600 (1400 * 4), while for 1024x768x32 or 1280x1024x32 the two values are the same.

Where do the extra 8 pixels come from in the 1400x1050 case? Help is needed to clarify this.
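For reference, a minimal sketch (not my kext code, just an illustration) of where the maximum of four comes from: a 128-byte EDID block holds up to four 18-byte detailed timing descriptors at fixed offsets, and a descriptor whose pixel clock is zero is a monitor descriptor rather than a timing.

#include <stdint.h>
#include <stdio.h>

static void listDetailedTimings(const uint8_t edid[128])
{
	for (int i = 0; i < 4; i++) {	/* hence the i=4 loop */
		const uint8_t *d = edid + 0x36 + 18 * i;
		int clk10k = d[0] | (d[1] << 8);	/* pixel clock, 10 kHz units */
		if (clk10k == 0)
			continue;	/* monitor descriptor (name, ranges...), not a timing */
		int hActive = d[2] | ((d[4] & 0xF0) << 4);
		int vActive = d[5] | ((d[7] & 0xF0) << 4);
		printf("detailed timing %d: %dx%d @ %d.%02d MHz\n",
		       i, hActive, vActive, clk10k / 100, clk10k % 100);
	}
}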


Wandering the web, I found the latest "ATI Displays" tools (4.5.9, last updated for the G5 and X1900, IIRC).

It adds a special panel in System Preferences and, as usual, it comes with a TV-out kext; more interestingly, this time it seems to be universal.

Good news! I checked it. See picture.

Yes, ATITVOut.kext is a universal binary now!!!

The only problem is to understand how to switch it on.

	<key>IOMatchCategory</key>
	<string>ATITVOut</string>

Here is a real task for our development.

 

 

A little progress for my X1400 Mobility!

I subclassed IONDRVFramebuffer and added some code to cscGetNextResolution; now more resolutions appear in Display preferences.

I'm using a manually injected EDID to get these resolutions. The problem is that the native resolution, 1400x1050, is not shown; I haven't figured out why yet. I also need to add code to cscSwitchResolution to actually do the job!

I agree. But there are 29 other cscXXX commands that might be needed to do the job.

We need to exchange information about the commands.

 

Actually, Slice's kext is not for Radeon cards beyond the X800.

Sorry! I have a Radeon 9000 and my kext works for it. Whether it works beyond that is a matter for discussion.

After fixing the code, the two detailed timing modes, 1400x1050@60Hz and @50Hz, are now listed in Display preferences. But one thing still confuses me: the rowBytes given by the boot mode "1400x1050x32" is 5632, instead of my calculated value of 5600 (1400 * 4), while for 1024x768x32 or 1280x1024x32 the two values are the same.

Where do the extra 8 pixels come from in the 1400x1050 case? Help is needed to clarify this.

I am not ready to answer your question. Maybe you could publish the part of your code where the value is calculated?

 

My results.

I manually injected an EDID and got this dmesg:

ATIFB: videomodes to var

1600 1200 1600 1200 0 0 6172 304 64 46 1 192 3 3

Depth set : 32

1600 1200 1600 1200 0 0 6172 304 64 46 1 192 3 3

hStart = 1664, hEnd = 1856, hTotal = 2160

vStart = 1201, vEnd = 1204, vTotal = 1250

h_total_disp = 0xc7010d hsync_strt_wid = 0x18067d

v_total_disp = 0x4af04e1 vsync_strt_wid = 0x304b0

pixclock = 6172

freq = 16202
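As a cross-check of those register values (my own reading of the Linux radeonfb packing, so treat the exact layout as an assumption): h_total_disp and v_total_disp are just the display and total timings packed into one register, with the horizontal counts in units of 8 pixels.

#include <stdint.h>
#include <assert.h>

/* Assumed packing per Linux radeonfb: the low half holds (total - 1),
   the high half holds (display - 1); horizontal values are divided by 8 first. */
static uint32_t pack_h_total_disp(uint32_t hDisplay, uint32_t hTotal)
{
	return ((hTotal / 8 - 1) & 0x3FF) | (((hDisplay / 8 - 1) & 0x1FF) << 16);
}

static uint32_t pack_v_total_disp(uint32_t vDisplay, uint32_t vTotal)
{
	return ((vTotal - 1) & 0xFFF) | (((vDisplay - 1) & 0xFFF) << 16);
}

int main(void)
{
	/* Values from the dmesg above: 1600x1200, hTotal 2160, vTotal 1250. */
	assert(pack_h_total_disp(1600, 2160) == 0x00C7010D);
	assert(pack_v_total_disp(1200, 1250) == 0x04AF04E1);
	return 0;
}

The sync_strt_wid values pack sync start and width the same way, plus polarity bits and a small fudge offset, so they don't fall out of the EDID numbers quite as directly.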

But my LCD has a 1024x768 mode, not 1600x1200, so I got a black screen.

So now I return to the problem of getting a good EDID.

I have no EDID in my BIOS. I need to get it from I2C, but I don't have a procedure that is reliable on my hardware. Searching...

ATIDisplay.png


Slice, I think you have to enable the TV out using the ATI control panel (the app in the screenshot in your post).

That's what I was explaining before: if the right strings are injected, the card will be detected and many more options will appear, including those for TV out.

However, there must be a "generic way" to enable it, but that requires some more coding, so testing it this way first seems better, before getting into something complicated for maybe nothing.

I'm gonna try again with my 9250; not long ago I had managed to enable this panel.

 

Edit: I'm posting a screenshot to show how it's supposed to look. I took it last time, when I didn't yet have that Intel kext. Look at the icons on the left.

 

Edit 2: I managed to bring back the full panel. I tried several codenames; they all enabled it, but only "Bugsy" brought in all the features. I attach the plist I used in the injector.

I get an error at the launch of the control panel: "A problem was encountered. Unable to connect to the TVout kernel extensions. Some features are unavailable".

I attach a new shot of the TV panel; however, it's still greyed out, since the extensions can't be "connected".

The extension loads successfully, so maybe it's a problem with the device path?

If you look closely, you'll notice I get a kind of echo. What could that mean? I don't usually get this when using plain VESA. Maybe one of my injected strings triggered something.

<dict>
			<key>@0,AAPL,boot-display</key>
			<integer>1</integer>
			<key>@0,ATY,EFIDisplay</key>
			<string>LVDS</string>
			<key>@0,compatible</key>
			<string>ATY,Bugsy</string>
			<key>@0,device_type</key>
			<string>display</string>
			<key>@0,display-dither-support</key>
			<integer>1</integer>
			<key>@0,display-link-component-bits</key>
			<integer>6</integer>
			<key>@0,display-type</key>
			<string>LCD</string>
			<key>@0,inverter-current</key>
			<integer>0</integer>
			<key>@0,name</key>
			<string>ATY,Bugsy_A</string>
			<key>@1,AAPL,boot-display</key>
			<integer>0</integer>
			<key>@1,ATY,EFIDisplay</key>
			<string>CRT2</string>
			<key>@1,compatible</key>
			<string>ATY,Bugsy</string>
			<key>@1,device_type</key>
			<string>display</string>
			<key>@1,display-dither-support</key>
			<integer>0</integer>
			<key>@1,display-link-component-bits</key>
			<integer>6</integer>
			<key>@1,inverter-current</key>
			<integer>1</integer>
			<key>@1,name</key>
			<string>ATY,Bugsy_B</string>
			<key>AAPL,backlight-control</key>
			<integer>1</integer>
			<key>AAPL00,Coherency</key>
			<integer>2</integer>
			<key>ATY,Copyright</key>
			<string>Copyright ATI Technologies Inc. 2005</string>
			<key>ATY,DeviceID</key>
			<integer>22880</integer>
			<key>ATY,EFIVersion</key>
			<string>1.3</string>
			<key>ATY,VendorID</key>
			<integer>4098</integer>
			<key>DFP1,EDID</key>
			<data>
			AP///////wAebQEAAQEBAQAQAQOAQCWWCs90o1dMsCMJ
			SEwvzgAxQEVAYUABAQEBAQEBAQEBZiFQsFEAGzBAcDYA
			xI4hAAAeDh8AgFEAHjBAgDcAP0MhAAAcAAAA/QA4Sx88
			CQAKICAgICAgAAAA/ABEQTcwWAogICAgICAgANI=
			</data>
			<key>LVDS,EDID</key>
			<data>
			AP///////wAebQEAAQEBAQAQAQOAQCWWCs90o1dMsCMJ
			SEwvzgAxQEVAYUABAQEBAQEBAQEBZiFQsFEAGzBAcDYA
			xI4hAAAeDh8AgFEAHjBAgDcAP0MhAAAcAAAA/QA4Sx88
			CQAKICAgICAgAAAA/ABEQTcwWAogICAgICAgANI=
			</data>
			<key>device_type</key>
			<string>ATY,DDParent</string>
			<key>model</key>
			<string>ATI Radeon 9250 PCI</string>
			<key>name</key>
			<string>ATY,BugsyParent</string>
		</dict>

Image_5.png

Image_1.png


I agree. But there are 29 other cscXXX commands that might be needed to do the job.

We need to exchange information about the commands.

Actually, cscGetVideoParameters and cscGetNextResolution are the two needed to show the available resolutions. Not all of the cscXXX control commands are needed to switch resolution, in my opinion, but I don't know much at the moment.

 

Sorry! I have a Radeon 9000 and my kext works for it. Whether it works beyond that is a matter for discussion.

The Radeon 9000 is older than the X800. As far as I know, the X600 and the Radeon 9600 are in the same series.

 

I am not ready to answer your question. Maybe you could publish the part of your code where the value is calculated?

In my code I'm still using the boot mode, like IONDRVFramebuffer does, since I cannot switch resolution yet. In the cscGetVideoParameters part, rowBytes needs to be returned:

				if (cachedMode->modeID == pixelParams->csDisplayModeID)
				{
					pixelInfo->vpBounds.left	= 0;
					pixelInfo->vpBounds.top		= 0;
					pixelInfo->vpBounds.right	= cachedMode->HDisplay;
					pixelInfo->vpBounds.bottom	= cachedMode->VDisplay;
					// Assumes a framebuffer line is exactly HDisplay
					// pixels wide; the boot mode's real pitch can be larger.
					pixelInfo->vpRowBytes	= fBitsPerPixel * cachedMode->HDisplay / 8;
					pixelInfo->vpPlaneBytes	= 0;
					pixelInfo->vpPixelSize	= fBitsPerPixel;
					ret = kIOReturnSuccess;
				}

In the code I compute it as "fBitsPerPixel * cachedMode->HDisplay / 8", and that really is the case for 1024x768 and 1280x1024 when I boot in those two modes. But when booting in 1400x1050 it's not correct: I printed out the fRowBytes already set by the boot mode, and its value is 5632.

As mentioned above, little or no Radeon-specific code has been added yet, but I'm posting my whole source here anyway, in case people are interested in reading it.

RadeonX1000_listonly.zip

But my LCD has a 1024x768 mode, not 1600x1200, so I got a black screen.

So now I return to the problem of getting a good EDID.

I have no EDID in my BIOS. I need to get it from I2C, but I don't have a procedure that is reliable on my hardware. Searching...

I have no idea how to do this either; we'd have to put the EDID in the plist file, like the Callisto patch does.


I didn't do a full analysis yet. Some information:

As Dong found,

cscGetVideoParameters and cscGetNextResolution are the two needed to show the available resolutions

while cscGetCommunicationInfo gives us the possibility to switch on TV out.

cscGetScaler, cscGetScalerInfo - for rotation (10.4.9 and up)

cscGetCurMode, cscGetDetailedTiming, cscGetTimingRanges, cscGetModeTiming - also for switching resolution

cscGetGamma - for color adjustments

cscGetMirror - for mirroring

cscGetDDCBlock - for getting the EDID

cscGetPowerState, cscSleepWake - for the sleep function

 

Some of them were previously rewritten by Joblo. The others we must do another way.
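To make it concrete, here is a bare sketch of where these csc commands arrive in an IONDRVFramebuffer subclass. This is nobody's working code: the name MyRadeonFB is made up, and the way the selector is pulled out of contents is my assumption (check the IONDRVControlParameters layout in Apple's IONDRVFramebuffer sources).

#include <IOKit/ndrvsupport/IONDRVFramebuffer.h>

class MyRadeonFB : public IONDRVFramebuffer
{
	OSDeclareDefaultStructors(MyRadeonFB)
public:
	virtual IOReturn doDriverIO(UInt32 commandID, void *contents,
				    UInt32 commandCode, UInt32 commandKind);
};

IOReturn MyRadeonFB::doDriverIO(UInt32 commandID, void *contents,
				UInt32 commandCode, UInt32 commandKind)
{
	if (commandCode == kIONDRVStatusCommand) {
		// Assumption: the csc selector sits in the parameter block.
		IONDRVControlParameters *pb = (IONDRVControlParameters *) contents;
		switch (pb->code) {
		case cscGetNextResolution:	// mode list (done)
		case cscGetVideoParameters:	// depth/rowBytes (done)
		case cscGetCurMode:		// still to do
		case cscGetDDCBlock:		// EDID
		default:
			break;			// nothing handled here yet
		}
	}
	// Everything falls through to the stock handling for now.
	return IONDRVFramebuffer::doDriverIO(commandID, contents,
					     commandCode, commandKind);
}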


I'm not using Slice's source code; mine is based directly on Apple's source. Actually, Slice's kext is not for Radeon cards beyond the X800.

Also, the i=4 loop is just for the detailed timing modes: four is the maximum number of that kind of mode in an EDID. The 4 modes I observed come from the established timing modes and standard timing modes.

After fixing the code, the two detailed timing modes, 1400x1050@60Hz and @50Hz, are now listed in Display preferences. But one thing still confuses me: the rowBytes given by the boot mode "1400x1050x32" is 5632, instead of my calculated value of 5600 (1400 * 4), while for 1024x768x32 or 1280x1024x32 the two values are the same.

Where do the extra 8 pixels come from in the 1400x1050 case? Help is needed to clarify this.

 

After reading ole2's post on the whole divisible-by-8 timing issue, I tried messing with your numbers.

 

1400x1050 is a 4:3 resolution (1400 / 1050 = 1.3333...)

 

1400 / 8 = 175 (all good here)

1050 / 8 = 131.25 (this is a problem)

 

 

Now let's look at your horizontal pixel value of 1408. To get a matching 4:3 ratio we need a vertical resolution of 1056 pixels,

so we have 1408x1056:

 

1408 / 8 = 176

1056 / 8 = 132

 

Voila: we have a slightly larger resolution that keeps the aspect ratio and now fits the divisible-by-8 timing. So either 1400x1050 displays aren't really 1400x1050 pixels, or the driver is padding the values to make them compatible and the remaining pixels are lost to overscan.


1408 / 8 = 176

1056 / 8 = 132

Voila: we have a slightly larger resolution that keeps the aspect ratio and now fits the divisible-by-8 timing. So either 1400x1050 displays aren't really 1400x1050 pixels, or the driver is padding the values to make them compatible and the remaining pixels are lost to overscan.

 

Thanks for the analysis. It makes sense to me.

 

Edited:

I read some more documents. It turned out to be a width vs. pitch thing.

1400: the display width, what we see. It is also the number of pixels in a line of my LCD, since this is its native resolution.

1408: the surface pitch, the actual number of pixels in a framebuffer line (corresponding video memory is used to hold such a line). Per the ATI register documentation it must be divisible by 32, and it seems it can be any suitable value as long as enough video memory is available. The radeonHD driver makes it divisible by 256, thus at least 1536.

 

The above analysis does not consider multiple screens; otherwise the pitch value is calculated from the width of the whole extended desktop (virtual x in the radeonHD driver), not from the primary display width only.

 

This link explains how to differentiate width from pitch: http://msdn.microsoft.com/en-us/library/bb206357(VS.85).aspx
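In code form, the rule above is just a round-up to the alignment boundary (a minimal sketch; the 32- and 256-pixel alignments are the ones quoted above):

#include <stdint.h>
#include <assert.h>

/* Round a display width up to the pitch alignment (in pixels). */
static uint32_t pitchPixels(uint32_t width, uint32_t align)
{
	return (width + align - 1) & ~(align - 1);
}

int main(void)
{
	assert(pitchPixels(1400, 32)  == 1408);	/* rowBytes = 1408 * 4 = 5632 */
	assert(pitchPixels(1024, 32)  == 1024);	/* already aligned */
	assert(pitchPixels(1280, 32)  == 1280);	/* already aligned */
	assert(pitchPixels(1400, 256) == 1536);	/* radeonHD's stricter rule */
	return 0;
}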


I made some new investigations.

I found that the IOGraphics project already contains readDDBlock to get the EDID from the monitor. But the method returns kIOReturnUnsupported for everything except AppleDisplay. So in the method hasDDCConnect() I set return(true).

Next: the class IOFramebuffer implements DDC reading over I2C per the VESA standard, while IONDRVFramebuffer does NOT, because it assumes a non-VESA monitor. I will suppose that my monitor is VESA compatible, so I can call the superclass.

Next: I found that IOI2CFamily.kext in systems 10.4.6 - 10.4.11 is PowerPC only, i.e. non-working.

So here is my question: is it possible to use the I2C bus on Intel computers?

I got the sources from Apple and recompiled them for Intel, but I still get an I2C timeout. Any thoughts?
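For reference, in the MyRadeonFB sketch from before, the override amounts to this (under the stated assumption that the monitor is VESA/DDC compatible; not the final code):

bool MyRadeonFB::hasDDCConnect(IOIndex connectIndex)
{
	// Claim DDC support instead of the default "unsupported" answer.
	return true;
}

IOReturn MyRadeonFB::getDDCBlock(IOIndex connectIndex, UInt32 blockNumber,
				 IOSelect blockType, IOOptionBits options,
				 UInt8 *data, IOByteCount *length)
{
	// Bypass IONDRVFramebuffer's NDRV path and use the generic VESA
	// DDC/I2C reading in the IOFramebuffer superclass.
	return IOFramebuffer::getDDCBlock(connectIndex, blockNumber,
					  blockType, options, data, length);
}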

 

EDITED: I found the Linux read-edid sources

http://packages.ubuntu.com/ru/feisty/read-edid

and a Windows EDID viewer

http://www.eldim.fr/products/display-contr...lite-free-tools

 

 

Here are the corrected sources, the compiled kext and the I2C framework.


I made some new investigations.

I found that the IOGraphics project already contains readDDBlock to get the EDID from the monitor. But the method returns kIOReturnUnsupported for everything except AppleDisplay. So in the method hasDDCConnect() I set return(true).

Next: the class IOFramebuffer implements DDC reading over I2C per the VESA standard, while IONDRVFramebuffer does NOT, because it assumes a non-VESA monitor. I will suppose that my monitor is VESA compatible, so I can call the superclass.

radeonHD cards use the DDC2 channel; I don't know whether that matters for IOFramebuffer/IOI2CFamily.

 

It's worth researching, though INT10 needs some library that may not be available on Apple's side. The Linux radeonHD driver uses an ATOMBIOS function in addition to the I2C method to read the EDID; I'm trying to port the ATOMBIOS part.


radeonHD cards use the DDC2 channel; I don't know whether that matters for IOFramebuffer/IOI2CFamily.

It's worth researching, though INT10 needs some library that may not be available on Apple's side. The Linux radeonHD driver uses an ATOMBIOS function in addition to the I2C method to read the EDID; I'm trying to port the ATOMBIOS part.

Maybe you will be successful. I have a legacy BIOS, not an ATOM one, and I know that I have no EDID in my BIOS.


New problems.

I checked different Windows programs for getting the EDID. No success:

WinI2C/DDC - no EDID

DumpEDID - failed to read

ELDIM EDID viewer - got 3 monitor infos but no dump

NIRSOFT MonitorInfo - got 8 monitor infos but no dump.



I made some new investigations. [...] Is it possible to use the I2C bus on Intel computers? I got the sources from Apple and recompiled them for Intel, but I still get an I2C timeout. Any thoughts?

 

http://www.paintyourdragon.com/uc/i2c/index.html

This is code for basic I2C over a DDC bus in OS X. No idea if it will work on a hackintosh, but it might help in figuring out how to read the EDID info.

 

The VESA open standards, including the EDID specs, are available here: https://vesa.sharedwork.com/

(The site doesn't work in Firefox for me; Safari is OK.)

For the email, use: public@vesa.org

The password is: stds2007


http://www.paintyourdragon.com/uc/i2c/index.html

This is code for basic I2C over a DDC bus in OS X. No idea if it will work on a hackintosh, but it might help in figuring out how to read the EDID info.

Thanks for the very nice project! But really, it is for a native Mac:

	/* Locate all available I2C buses */
	if ((dict = IOServiceMatching(kIOI2CInterfaceClassName)))
	{

As I previously mentioned, this interface isn't working.

Nonetheless, I got new information:

	/* Addressing seems to work a little wonky in OSX; I2C addresses are
   supposed to be 7 bits, but it appears that the IOI2C* functions
   include the subsequent read/write bit within the address fields,
   making for an 8-bit value (address is shifted left by one).  But
   now, the Mindsensors servo controller claims to have a default
   address of 0xb0 (and even indicates such in its startup blink),
   which exceeds the 7-bit address range.  I believe the servo
   controller is treating the address and subsequent bit similarly.
   So the left-shift isn't performed if the high bit is set. */
request.sendAddress = request.replyAddress =
  (address & 0x80) ? address : (address << 1);

In IOGraphics, the address used is 0xa0. :)

but here

static const short
i2cAddress = 0x50,	   /* 1010AAA as per Microchip specs */

So is this OK?! 0x50?
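For what it's worth, 0x50 and 0xa0 are the same device: 0x50 is the 7-bit EDID EEPROM address, and it becomes 0xA0/0xA1 once the read/write bit is appended, which is exactly the left shift in the comment above.

#include <stdint.h>
#include <assert.h>

int main(void)
{
	const uint8_t sevenBit = 0x50;		/* EDID EEPROM, 7-bit form */
	assert((sevenBit << 1)       == 0xA0);	/* 8-bit write address */
	assert(((sevenBit << 1) | 1) == 0xA1);	/* 8-bit read address */
	return 0;
}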


On the site of a DDC/CI reading tool, I found the following note:

 

A DDC/CI device is a client at I2C bus address 0x37 (0x6e for write transactions, 0x6f for read).

0x6e is 0x37 shifted left by 1 bit, with the rightmost bit zeroed for an I2C write command or set to 1 for an I2C read.
Data being sent/received is formatted in frames; each frame has a fixed prefix byte, followed by the length of the payload, the payload itself, and a checksum made by XORing the submitted/received I2C message. The initial XOR value depends on the operation attempted: it is equal to the I2C address submitted on the wire, and seems to be a fixed value for read transactions. For details see the ddcci-tool sources.

 

And another note from the same page seems practical to take into account:

 

Once loaded, it's a good idea to scan the I2C busses using the i2cdetect program (launch it with no args to see the list of I2C busses). Your monitor should answer at least at address 0x50 (EDID reading) and 0x37 (DDC/CI). The 0x37 is mandatory for running the program. If it does not answer, try other busses as well. My Radeon exposes 4 busses: crt, vga, dvi and monid. It therefore answers at 0xa0 on vga and dvi (as expected), and at 0x37 on the vga bus only.
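Putting the framing note into code, a DDC/CI write looks roughly like this (my reading of the note above; verify against the ddcci-tool sources before relying on it):

#include <stdint.h>
#include <stddef.h>

/* Build a DDC/CI write frame: wire address, host prefix byte,
   0x80|length, payload, then an XOR checksum over everything,
   seeded (per the note) by including the address byte itself. */
static size_t ddcciBuildWrite(uint8_t *out, const uint8_t *payload, uint8_t len)
{
	size_t n = 0;
	out[n++] = 0x6E;		/* 0x37 << 1, write */
	out[n++] = 0x51;		/* host "source" prefix byte */
	out[n++] = 0x80 | len;		/* payload length */
	for (uint8_t i = 0; i < len; i++)
		out[n++] = payload[i];

	uint8_t chk = 0;
	for (size_t i = 0; i < n; i++)
		chk ^= out[i];		/* XOR of the whole message */
	out[n++] = chk;
	return n;			/* bytes to put on the wire */
}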


Thanks for the very nice project! But really, it is for a native Mac:

	/* Locate all available I2C buses */
	if ((dict = IOServiceMatching(kIOI2CInterfaceClassName)))
	{

As I previously mentioned, this interface isn't working.

It isn't working as in dict comes back empty?

 

Correct me if I'm wrong: could we do this call from userspace too?

 

How do the nVidia guys perform such a call to get access to the GPU's I2C?


To Dong:

Only two months were needed for me to make your RadeonPCI work.

Thank you again! Now I can control Radeon registers in the new way:

0x0140: 32002002 1305657A 3FFF3800 43FF4000 0009000F 01FFFFFF 5060006A 3FFF3800

0x0160: 00000040 00000000 0000001B 00000606 3C000000 000E0000 306009E1 86868686

Here I made some improvements to the output formatting:

void radeon_dump_io(void *map, CARD32 start, CARD32 end)
{
	int i, j;
	// Print 8 dwords (32 bytes) of register space per line.
	for (j = start; j <= end; j += 32)
	{
		printf("0x%4.4X: ", j);
		for (i = 0; i < 32; i += 4)
		{
			CARD32 val = RegRead(map, i + j);
			printf(" %8.8X", val);
		}
		printf("\n");
	}
}
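(The two dump lines above, for instance, correspond to a call like radeon_dump_io(map, 0x0140, 0x0160).)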

Maybe I should add other functions here? (DDC, OpenGL commands :( )

 

Apple's SimpleUserClient works too... I don't know what changed in my system. Maybe the X11 libraries?

 

To ole2:

Thanks for the new link.

How did you manage to compile the tool?

ddcci-tool.c:34:27: error: linux/i2c-dev.h: No such file or directory

ddcci-tool.c: In function 'i2c_write':

ddcci-tool.c:102: error: storage size of 'msg_rdwr' isn't known

ddcci-tool.c:103: error: storage size of 'i2cmsg' isn't known

ddcci-tool.c:114: warning: implicit declaration of function 'ioctl'

ddcci-tool.c:114: error: 'I2C_RDWR' undeclared (first use in this function)

 

Another interesting link, with i2c-dev.h:

http://xgoat.com/wp/2007/11/11/using-i2c-f...space-in-linux/

I'm afraid it is far from our needs...


Here is how they get access to the list of I2C interfaces:

 

int main(int argc, char *argv[])
{
	kern_return_t	kr;
	io_service_t	framebuffer, interface;
	io_string_t	path;
	IOOptionBits	bus;
	IOItemCount	busCount;
	Boolean		save;

This part gets access to the IONDRV API (probably a nub, as it's a typedef):

	framebuffer = CGDisplayIOServicePort(CGMainDisplayID());
	{
		kr = IORegistryEntryGetPath(framebuffer, kIOServicePlane, path);
		assert(KERN_SUCCESS == kr);
		fprintf(stderr, "\n/* Using device: %s */\n", path);

This part retrieves the list of I2C busses accessible on the GPU:

		kr = IOFBGetI2CInterfaceCount(framebuffer, &busCount);
		assert(kIOReturnSuccess == kr);

		for (bus = 0; bus < busCount; bus++)
		{
			IOI2CConnectRef connect;

			fprintf(stderr, "/* Bus %ld: */\n", bus);

This part retrieves a handle to the I2C host interface (probably a nub as well, the same type as the IONDRV interface):

			kr = IOFBCopyI2CInterfaceForBus(framebuffer, bus, &interface);
			if (kIOReturnSuccess != kr)
				continue;

This part connects to the selected I2C interface:

			kr = IOI2CInterfaceOpen(interface, kNilOptions, &connect);

			IOObjectRelease(interface);
			assert(kIOReturnSuccess == kr);
			if (kIOReturnSuccess != kr)
				continue;

			save = (argc > 1) && (argv[1][0] == 's');

This part passes the connection reference to the EDID reader/parser:

			EDIDRead(connect, save);

			IOI2CInterfaceClose(connect, kNilOptions);
		}
	}

	exit(0);
	return 0;
}

 

The rest of the code is here.

The reference to this code was taken from an nVidia EDID handling discussion on the forum.

 

Maybe all we need is to reuse the already distributed IOGraphics package?

 

By the way, the following components are also there:

IOBootFramebuffer.c
IONDRV.cpp
IONDRV.h
IOI2CInterface.cpp
etc.


OK!

But the code will only work if the system has an I2C driver and I2C registry entries. That is exactly why I recompiled IOI2CFamily.kext and the framework. I am not sure it is enough.
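A quick userspace check (just a sketch) of whether the recompiled kext actually publishes any IOI2CInterface nubs in the registry; if this prints nothing, the sample code above has nothing to match against:

#include <IOKit/IOKitLib.h>
#include <IOKit/i2c/IOI2CInterface.h>
#include <stdio.h>

int main(void)
{
	io_iterator_t iter;
	kern_return_t kr = IOServiceGetMatchingServices(kIOMasterPortDefault,
			IOServiceMatching(kIOI2CInterfaceClassName), &iter);
	if (kr != KERN_SUCCESS)
		return 1;

	io_service_t svc;
	while ((svc = IOIteratorNext(iter))) {
		io_name_t name;
		IORegistryEntryGetName(svc, name);	/* registry class/name */
		printf("found I2C interface: %s\n", name);
		IOObjectRelease(svc);
	}
	IOObjectRelease(iter);
	return 0;
}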


New achievements.

I2C works:

ATIFB: aper_base: 38000000 MC_FB_LOC to: 3fff3800, MC_AGP_LOC to: 47ff4000
ATIFB: radeon_get_moninfo: bios 4 scratch = 1000004
ATIFB: Bios Connector table: 
ATIFB: Port0: DDCType-0x60, dac_type-1, tmds_type-0, connector_type-1
ATIFB: Port4: DDCType-0x0, dac_type--1, tmds_type--1, connector_type-7
ATIFB: Port5: DDCType-0x0, dac_type-1, tmds_type--1, connector_type-5
ATIFB: Port6: DDCType-0x0, dac_type-0, tmds_type-0, connector_type-0
ATIFB: found connection from BIOS:
ATIFB: Retreived PLL infos from BIOS
ATIFB: Reference=14.32 MHz (RefDiv=6) Memory=300.00 Mhz, System=165.00 MHz
ATIFB: PLL min 20000 max 35000
ATIFB: PLL found
ATIFB: pci registered
Starting monitor auto detection...
IONDRVFramebuffer::getEDID
IONDRVFramebuffer::createI2C
I2C created
  defaultI2CTiming
IONDRVFramebuffer::getEDID fromI2C OK
ATIFB: I2C (port 2) ... found LVDS panel
IONDRVFramebuffer::getEDID
IONDRVFramebuffer::createI2C
I2C created
  defaultI2CTiming
IONDRVFramebuffer::getEDID fromI2C OK
ATIFB: I2C (port 3) ... found LVDS panel
ATIFB: probe screen end 
ATIFB: biosEDID @0000

But the result:

| | | | "I2C,EDID" =

00000000000000000000000000000000000000000000000000000000000000000000000000000000

00000000000000000000000000000000000000000000000000000000000000000000000000000000

0000000000000000>

In this attempt I changed the I2C write address to 0x6e instead of 0xa0, as I saw in Joblo's project.

More investigation is needed, and I don't have much time for it.

Link to comment
Share on other sites

We have 4 buses for I2C (4 ports, connectIndexes). But 8 connections?!

void IONDRVFramebuffer::setDDCData( IOIndex connectIndex, UInt32 value )
{
	val = INREG(rinfo->i2c[connectIndex].ddc_reg) & ~(VGA_DDC_DATA_OUT_EN);
	// ...

	/* Here connectIndex is assumed to be 0..3:
	   rinfo->i2c[0].ddc_reg = GPIO_MONID;
	   rinfo->i2c[1].ddc_reg = GPIO_DVI_DDC;
	   rinfo->i2c[2].ddc_reg = GPIO_VGA_DDC;
	   rinfo->i2c[3].ddc_reg = GPIO_CRT2_DDC;
	*/

These are the general Radeon GPIO registers 0x60, 0x64, 0x68, 0x6c.

In Apple's IONDRVSupport sources this method is empty; it returns kIOReturnUnsupported.

After the replacement we would expect I2C/DDC to be working... For me, no :P

I still have no good connector info from the BIOS (the Linux procedure is wrong for my card) and can't get the EDID from either the BIOS or I2C. Any other ideas or sources?

Dong, can you modify RadeonPCI to read I2C? Just copy the I2C routines from RadeonFB and make the client do "./RadeonDump -i bus,addr,length", with hex output to the screen.
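For anyone following along, those RadeonFB routines boil down to bit-banging I2C through the GPIO_*_DDC registers listed above. A rough sketch, assuming the RadeonFB register helpers (INREG/OUTREG) and the bit names from the Linux radeonfb headers; not tested code:

/* Setting an *_OUT_EN bit drives the open-collector line low;
   clearing it releases the line and the pull-up brings it high. */
static void ddcSetLines(CARD32 ddc_reg, bool scl, bool sda)
{
	CARD32 val = INREG(ddc_reg) & ~(VGA_DDC_CLK_OUT_EN | VGA_DDC_DATA_OUT_EN);
	if (!scl) val |= VGA_DDC_CLK_OUT_EN;	/* drive SCL low */
	if (!sda) val |= VGA_DDC_DATA_OUT_EN;	/* drive SDA low */
	OUTREG(ddc_reg, val);
	(void) INREG(ddc_reg);			/* flush the posted write */
}

static bool ddcGetSda(CARD32 ddc_reg)
{
	return (INREG(ddc_reg) & VGA_DDC_DATA_INPUT) != 0;
}

Start/stop conditions and byte transfers are then just sequences of these two calls with the right delays in between.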

 

Bad, bad, bad...

I got a black screen even when I commented out all of the RadeonFB additions, so the project is broken at its base. :)

Back to the start. But all my corrections to the Radeon procedures are still valid.

